Feb 17 00:08:48 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 17 00:08:49 crc kubenswrapper[5109]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
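The deprecation notices above all point to the same remedy: move these flags into the file passed via --config. A minimal sketch of what that migration could look like, assuming the KubeletConfiguration v1beta1 API (the values shown are illustrative placeholders, not taken from this log):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (settable in-config since Kubernetes v1.27)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir (placeholder path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (placeholder taint)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved (placeholder reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi
```

Flags still passed on the command line take precedence over the config file until they are removed, so the warnings persist as long as both are set.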
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.138487 5109 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144371 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144397 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144404 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144410 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144416 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144423 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144429 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144435 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144441 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144447 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144454 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144460 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144466 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144471 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144477 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144483 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144488 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144494 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144500 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144505 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144510 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144521 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144530 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144536 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144542 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144548 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144554 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144560 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144566 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144572 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144577 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144583 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144588 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144648 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144654 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144660 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144665 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144672 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144679 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144687 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144693 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144699 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144706 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144712 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144718 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144724 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144729 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144735 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144741 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144747 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144752 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144758 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144763 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144769 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144776 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144783 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144789 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144794 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144800 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144806 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144811 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144817 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144823 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144828 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144833 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144838 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144843 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144848 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144854 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144859 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144864 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144870 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144876 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144889 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144895 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144901 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144907 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144912 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144918 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144926 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144936 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144942 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144948 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144953 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144959 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.144965 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145632 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145645 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145651 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145656 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145663 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145668 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145674 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145682 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145689 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145696 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145702 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145707 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145713 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145719 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145725 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145731 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145736 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145742 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145748 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145758 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145764 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145770 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145776 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145782 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145790 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145798 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145803 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145810 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145815 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145821 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145828 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145833 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145839 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145846 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145852 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145858 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145864 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145869 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145875 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145881 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145889 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145895 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145901 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145906 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145912 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145918 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145924 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145930 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145935 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145940 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145946 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145955 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145960 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145966 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145972 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145978 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145984 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145990 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.145996 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146002 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146008 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146014 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146019 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146025 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146031 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146036 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146044 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146050 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146056 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146061 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146067 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146073 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146079 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146085 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146093 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146099 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146105 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146110 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146116 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146122 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146127 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146133 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146139 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146147 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146153 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.146159 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147037 5109 flags.go:64] FLAG: --address="0.0.0.0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147057 5109 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147100 5109 flags.go:64] FLAG: --anonymous-auth="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147109 5109 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147118 5109 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147125 5109 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147133 5109 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147143 5109 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147149 5109 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147156 5109 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147162 5109 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147170 5109 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147176 5109 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147193 5109 flags.go:64] FLAG: --cgroup-root=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147199 5109 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147207 5109 flags.go:64] FLAG: --client-ca-file=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147213 5109 flags.go:64] FLAG: --cloud-config=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147220 5109 flags.go:64] FLAG: --cloud-provider=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147227 5109 flags.go:64] FLAG: --cluster-dns="[]"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147239 5109 flags.go:64] FLAG: --cluster-domain=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147245 5109 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147251 5109 flags.go:64] FLAG: --config-dir=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147258 5109 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147264 5109 flags.go:64] FLAG: --container-log-max-files="5"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147272 5109 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147278 5109 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147284 5109 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147291 5109 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147298 5109 flags.go:64] FLAG: --contention-profiling="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147308 5109 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147314 5109 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147321 5109 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147327 5109 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147335 5109 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147342 5109 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147347 5109 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147353 5109 flags.go:64] FLAG: --enable-load-reader="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147359 5109 flags.go:64] FLAG: --enable-server="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147365 5109 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147379 5109 flags.go:64] FLAG: --event-burst="100"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147385 5109 flags.go:64] FLAG: --event-qps="50"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147391 5109 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147397 5109 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147403 5109 flags.go:64] FLAG: --eviction-hard=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147411 5109 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147417 5109 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147442 5109 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147449 5109 flags.go:64] FLAG: --eviction-soft=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147455 5109 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147461 5109 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147468 5109 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147474 5109 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147480 5109 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147487 5109 flags.go:64] FLAG: --fail-swap-on="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147492 5109 flags.go:64] FLAG: --feature-gates=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147500 5109 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147506 5109 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147512 5109 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147519 5109 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147527 5109 flags.go:64] FLAG: --healthz-port="10248"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147533 5109 flags.go:64] FLAG: --help="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147544 5109 flags.go:64] FLAG: --hostname-override=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147551 5109 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147556 5109 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147562 5109 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147568 5109 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147574 5109 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147580 5109 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147586 5109 flags.go:64] FLAG: --image-service-endpoint=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147613 5109 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147626 5109 flags.go:64] FLAG: --kube-api-burst="100"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147632 5109 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147638 5109 flags.go:64] FLAG: --kube-api-qps="50"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147645 5109 flags.go:64] FLAG: --kube-reserved=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147650 5109 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147656 5109 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147662 5109 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147668 5109 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147674 5109 flags.go:64] FLAG: --lock-file=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147688 5109 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147694 5109 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147700 5109 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147710 5109 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147716 5109 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147722 5109 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147728 5109 flags.go:64] FLAG: --logging-format="text"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147734 5109 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147740 5109 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147746 5109 flags.go:64] FLAG: --manifest-url=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147752 5109 flags.go:64] FLAG: --manifest-url-header=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147760 5109 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147766 5109
flags.go:64] FLAG: --max-open-files="1000000" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147775 5109 flags.go:64] FLAG: --max-pods="110" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147784 5109 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147790 5109 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147796 5109 flags.go:64] FLAG: --memory-manager-policy="None" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147802 5109 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147808 5109 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147814 5109 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147821 5109 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147835 5109 flags.go:64] FLAG: --node-status-max-images="50" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147841 5109 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147847 5109 flags.go:64] FLAG: --oom-score-adj="-999" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147853 5109 flags.go:64] FLAG: --pod-cidr="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147861 5109 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147872 5109 flags.go:64] FLAG: --pod-manifest-path="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147877 5109 flags.go:64] FLAG: --pod-max-pids="-1" Feb 17 
00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147883 5109 flags.go:64] FLAG: --pods-per-core="0" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147889 5109 flags.go:64] FLAG: --port="10250" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147895 5109 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147901 5109 flags.go:64] FLAG: --provider-id="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147907 5109 flags.go:64] FLAG: --qos-reserved="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147922 5109 flags.go:64] FLAG: --read-only-port="10255" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147928 5109 flags.go:64] FLAG: --register-node="true" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147934 5109 flags.go:64] FLAG: --register-schedulable="true" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147940 5109 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147952 5109 flags.go:64] FLAG: --registry-burst="10" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147958 5109 flags.go:64] FLAG: --registry-qps="5" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147964 5109 flags.go:64] FLAG: --reserved-cpus="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147970 5109 flags.go:64] FLAG: --reserved-memory="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147978 5109 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147984 5109 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147992 5109 flags.go:64] FLAG: --rotate-certificates="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.147997 5109 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148003 
5109 flags.go:64] FLAG: --runonce="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148010 5109 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148023 5109 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148030 5109 flags.go:64] FLAG: --seccomp-default="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148036 5109 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148042 5109 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148048 5109 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148055 5109 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148061 5109 flags.go:64] FLAG: --storage-driver-password="root" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148066 5109 flags.go:64] FLAG: --storage-driver-secure="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148072 5109 flags.go:64] FLAG: --storage-driver-table="stats" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148079 5109 flags.go:64] FLAG: --storage-driver-user="root" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148085 5109 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148092 5109 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148097 5109 flags.go:64] FLAG: --system-cgroups="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148103 5109 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148113 5109 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 17 
00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148119 5109 flags.go:64] FLAG: --tls-cert-file="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148125 5109 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148139 5109 flags.go:64] FLAG: --tls-min-version="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148145 5109 flags.go:64] FLAG: --tls-private-key-file="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148163 5109 flags.go:64] FLAG: --topology-manager-policy="none" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148174 5109 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148180 5109 flags.go:64] FLAG: --topology-manager-scope="container" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148186 5109 flags.go:64] FLAG: --v="2" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148194 5109 flags.go:64] FLAG: --version="false" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148202 5109 flags.go:64] FLAG: --vmodule="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148211 5109 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.148218 5109 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148412 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148420 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148427 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148433 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148439 5109 
feature_gate.go:328] unrecognized feature gate: NewOLM Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148456 5109 feature_gate.go:328] unrecognized feature gate: Example Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148462 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148467 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148473 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148479 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148487 5109 feature_gate.go:328] unrecognized feature gate: Example2 Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148493 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148499 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148505 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148510 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148516 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148521 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148526 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148532 5109 feature_gate.go:328] unrecognized feature gate: 
ManagedBootImagesAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148537 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148543 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148549 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148554 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148560 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148565 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148587 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148636 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148642 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148648 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148653 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148659 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148666 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148671 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 
00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148679 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148685 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148691 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148697 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148706 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148711 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148717 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148722 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148728 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148733 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148738 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148743 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148748 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148754 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 17 00:08:49 
crc kubenswrapper[5109]: W0217 00:08:49.148760 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148766 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148773 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148783 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148789 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148795 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148801 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148807 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148813 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148818 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148827 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148842 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148847 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148853 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 17 00:08:49 crc 
kubenswrapper[5109]: W0217 00:08:49.148858 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148864 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148869 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148874 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148887 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148892 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148898 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148903 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148911 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148916 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148921 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148927 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148933 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148938 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 
00:08:49.148943 5109 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148948 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148953 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148959 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148964 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148970 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148975 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148981 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148987 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148992 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.148998 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.149013 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true 
VolumeAttributesClass:false]} Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.166986 5109 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.167021 5109 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167104 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167116 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167126 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167132 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167138 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167144 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167151 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167156 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167162 5109 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167167 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167172 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167179 5109 feature_gate.go:328] unrecognized feature gate: 
MachineAPIMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167184 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167190 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167196 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167202 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167207 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167214 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167220 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167225 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167231 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167236 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167242 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167247 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167253 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167259 5109 feature_gate.go:328] unrecognized feature gate: Example2 Feb 17 00:08:49 crc 
kubenswrapper[5109]: W0217 00:08:49.167264 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167270 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167275 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167280 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167286 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167291 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167297 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167303 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167309 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167316 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167323 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167330 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167336 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167342 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 
00:08:49.167347 5109 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167353 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167358 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167379 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167386 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167391 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167397 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167402 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167408 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167413 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167418 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167424 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167429 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167435 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167441 5109 feature_gate.go:328] unrecognized feature 
gate: ManagedBootImagesvSphere Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167446 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167452 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167457 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167467 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167473 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167478 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167484 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167489 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167494 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167500 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167505 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167510 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167516 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167521 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:08:49 crc 
kubenswrapper[5109]: W0217 00:08:49.167526 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167532 5109 feature_gate.go:328] unrecognized feature gate: Example Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167537 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167543 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167548 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167554 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167559 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167574 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167579 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167587 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167614 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167620 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167626 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167631 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167636 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167642 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167647 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.167657 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167831 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167843 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167849 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 17 00:08:49 crc 
kubenswrapper[5109]: W0217 00:08:49.167856 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167862 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167868 5109 feature_gate.go:328] unrecognized feature gate: Example2 Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167873 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167880 5109 feature_gate.go:328] unrecognized feature gate: Example Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167885 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167891 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167896 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167902 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167907 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167913 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167918 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167923 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167929 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167934 5109 feature_gate.go:328] unrecognized feature gate: 
AzureClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167939 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167945 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167950 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167956 5109 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167965 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167974 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167979 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167984 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167989 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.167995 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168000 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168006 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168011 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168017 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 17 
00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168023 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168029 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168034 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168041 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168047 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168054 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168059 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168065 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168071 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168076 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168081 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168087 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168093 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168101 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168108 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168113 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168119 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168125 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168131 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168136 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168142 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168148 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168154 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168162 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168168 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168174 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168179 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168185 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 
00:08:49.168190 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168196 5109 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168202 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168207 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168213 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168220 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168226 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168232 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168238 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168243 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168249 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168255 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168260 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168266 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 17 00:08:49 crc 
kubenswrapper[5109]: W0217 00:08:49.168271 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168276 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168282 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168287 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168293 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168298 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168304 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168310 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168316 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168321 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168327 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.168333 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.168343 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true 
SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.168578 5109 server.go:962] "Client rotation is on, will bootstrap in background" Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.174488 5109 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.178128 5109 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.178265 5109 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.179586 5109 server.go:1019] "Starting client certificate rotation" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.179844 5109 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.179939 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.210745 5109 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.215361 5109 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.216469 5109 certificate_manager.go:596] "Failed while requesting a signed certificate from 
the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.235998 5109 log.go:25] "Validated CRI v1 runtime API" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.291020 5109 log.go:25] "Validated CRI v1 image API" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.293387 5109 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.300390 5109 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-17-00-02-11-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.300448 5109 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.335023 5109 manager.go:217] Machine: {Timestamp:2026-02-17 00:08:49.331723861 +0000 UTC m=+0.663278659 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:85fb0ff0-40b9-49c9-951f-8aba64a9d9fd BootID:351c3bed-1bde-4016-be98-c82504203bf7 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1f:08:5d Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1f:08:5d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0f:24:5e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:25:7d:ea Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1d:08:b0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f8:84:b3 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:a6:c6:0d:4e:22 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:8f:ec:a0:a2:bc Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 
Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.335369 5109 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.335573 5109 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338135 5109 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338185 5109 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338482 5109 topology_manager.go:138] "Creating topology manager with none policy" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338498 5109 container_manager_linux.go:306] "Creating device plugin manager" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338531 5109 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.338562 5109 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.339736 5109 state_mem.go:36] "Initialized new in-memory state store" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.339955 5109 server.go:1267] "Using root directory" path="/var/lib/kubelet" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.344021 5109 kubelet.go:491] "Attempting to sync node with API server" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.344068 5109 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.344100 5109 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.344119 5109 kubelet.go:397] "Adding apiserver pod source" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.344144 5109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.350202 5109 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.350252 5109 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.350918 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.350931 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.353235 5109 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.353271 5109 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.358566 5109 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.359092 5109 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.360045 5109 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361291 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361336 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361352 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361368 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361382 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361408 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361423 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361437 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361455 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361480 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.361512 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.362066 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.363194 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.363225 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.365123 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.387762 5109 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.387845 5109 server.go:1295] "Started kubelet"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.388197 5109 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.388203 5109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.388526 5109 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.389647 5109 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 17 00:08:49 crc systemd[1]: Started Kubernetes Kubelet.
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.391023 5109 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.391166 5109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.395995 5109 server.go:317] "Adding debug handlers to kubelet server"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.398306 5109 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.398338 5109 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.398713 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.398795 5109 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.399772 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.399974 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410424 5109 factory.go:153] Registering CRI-O factory
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410517 5109 factory.go:223] Registration of the crio container factory successfully
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410682 5109 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410699 5109 factory.go:55] Registering systemd factory
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410711 5109 factory.go:223] Registration of the systemd container factory successfully
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.403575 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894e013dd73b89d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,LastTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410744 5109 factory.go:103] Registering Raw factory
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.410817 5109 manager.go:1196] Started watching for new ooms in manager
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.411479 5109 manager.go:319] Starting recovery of all containers
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.431958 5109 manager.go:324] Recovery completed
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.451855 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.454387 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.454470 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.454506 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.460040 5109 cpu_manager.go:222] "Starting CPU manager" policy="none"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.460068 5109 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.460090 5109 state_mem.go:36] "Initialized new in-memory state store"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.460411 5109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.463128 5109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.463196 5109 status_manager.go:230] "Starting to sync pod status with apiserver"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.463231 5109 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.463245 5109 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.463383 5109 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.464775 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.470733 5109 policy_none.go:49] "None policy: Start"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.470779 5109 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.470802 5109 state_mem.go:35] "Initializing new in-memory state store"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488398 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488455 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488466 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488476 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488486 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488495 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488504 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488513 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488522 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488530 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488539 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488547 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488555 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488564 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488577 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488586 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488618 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488629 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488637 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488645 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488652 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488661 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488670 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488678 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488689 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488698 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488707 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488717 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488733 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488743 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488753 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488762 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488772 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488781 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488791 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488801 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488810 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488819 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488829 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488838 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488847 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488856 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488866 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488877 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488887 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488898 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488908 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488917 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488926 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488935 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488944 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488953 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488961 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488971 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488980 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.488989 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489003 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489011 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489019 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489028 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489037 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489045 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489053 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489060 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489069 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489078 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489085 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489092 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489100 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489108 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489116 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489124 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489133 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489140 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489149 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489158 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489165 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489174 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489182 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489191 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489200 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489208 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489217 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489225 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489234 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489244 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489258 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489269 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489280 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489299 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489310 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489322 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489333 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489345 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489358 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489371 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" 
volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489385 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489398 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489409 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489418 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489428 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489438 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" 
volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.489454 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490210 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490222 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490232 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490242 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490251 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490263 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490271 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490280 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490290 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490311 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490319 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490327 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490336 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490344 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490354 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490365 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490375 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" 
seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490383 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490393 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490401 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490411 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490421 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490430 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 
00:08:49.490437 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490447 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490458 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490467 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490478 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490488 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490497 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490506 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.490515 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492681 5109 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492704 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492715 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492723 5109 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492732 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492741 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492750 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492758 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492768 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492836 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492850 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492859 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492870 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492879 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492890 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492900 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492909 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492918 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492928 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492939 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492949 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492958 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" 
seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492967 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492976 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492986 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.492999 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493007 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493017 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493026 5109 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493035 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493046 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493055 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493064 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493073 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493083 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493092 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493101 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493109 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493121 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493130 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493139 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493150 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493158 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493168 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493176 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493186 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493195 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493204 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493214 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493224 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493233 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493248 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493256 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Feb 17 00:08:49 crc
kubenswrapper[5109]: I0217 00:08:49.493266 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493303 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493313 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493323 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493333 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493342 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493352 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493361 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493370 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493379 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493389 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493398 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493407 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493416 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493460 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493472 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493508 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493518 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493529 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b"
volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493540 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493551 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493561 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493572 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493615 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493630 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493643 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493655 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493719 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493734 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493757 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493767 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493777 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493786 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493795 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493804 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493813 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493822 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493864 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc"
volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493874 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493883 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493894 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493903 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493912 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493923 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493934 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493943 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493966 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493977 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.493990 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494001 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494011 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494020 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494030 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494041 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494051 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494060 5109 reconstruct.go:97] "Volume reconstruction finished"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.494066 5109 reconciler.go:26] "Reconciler: start to sync state"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217
00:08:49.498982 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.529963 5109 manager.go:341] "Starting Device Plugin manager"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.530756 5109 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.530780 5109 server.go:85] "Starting device plugin registration server"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.531222 5109 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.531246 5109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.531506 5109 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.531617 5109 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.531631 5109 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.534917 5109 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.534974 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.564451 5109 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.564846 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.565912 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.565974 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.565985 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.566871 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567078 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567154 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567893 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567950 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567969 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567986 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.568007 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.567991 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.568873 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569066 5109 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569134 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569386 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569423 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569806 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569853 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.569872 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.570133 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.570267 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.570338 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571219 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571225 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571287 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571298 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571263 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.571366 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572117 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572266 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572310 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572774 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572826 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572847 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572849 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.572999 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.573015 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.574058 5109 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.574114 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.575340 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.575372 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.575382 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.602065 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="400ms"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.610114 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.616007 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.621353 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.633387 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.634383 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.634426 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.634437 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.634467 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.635069 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.641475 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.645464 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.696882 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.696928 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697384 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697429 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697445 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697462 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697477 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697499 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697516 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697535 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697554 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697570 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName:
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697586 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697614 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697628 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697643 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697658 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697693 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697709 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697725 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.697741 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698120 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698227 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698664 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698688 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698665 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 
17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.698755 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.699690 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799495 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799606 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799636 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799695 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799721 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799797 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799821 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799718 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799801 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799856 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799934 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799954 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799951 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799976 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799999 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800022 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800029 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.799905 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800070 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800092 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800153 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800184 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800203 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800205 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800221 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800163 5109 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800284 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800307 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800331 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800357 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.800355 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc 
kubenswrapper[5109]: I0217 00:08:49.800486 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.836118 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.837320 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.837425 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.837455 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.837509 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:08:49 crc kubenswrapper[5109]: E0217 00:08:49.838337 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.912349 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.917216 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.922824 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.942503 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.945797 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.991174 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-a6a84a879b16f279015fa86d4278549af0fdc67867789b6efe992d2c85298446 WatchSource:0}: Error finding container a6a84a879b16f279015fa86d4278549af0fdc67867789b6efe992d2c85298446: Status 404 returned error can't find the container with id a6a84a879b16f279015fa86d4278549af0fdc67867789b6efe992d2c85298446 Feb 17 00:08:49 crc kubenswrapper[5109]: W0217 00:08:49.992631 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-a20c1929fdd0351d99b6ce4f1b337d034784bef16f948325aa8967cd5c577d6c WatchSource:0}: Error finding container a20c1929fdd0351d99b6ce4f1b337d034784bef16f948325aa8967cd5c577d6c: Status 404 returned error can't find the container with id a20c1929fdd0351d99b6ce4f1b337d034784bef16f948325aa8967cd5c577d6c Feb 17 00:08:49 crc kubenswrapper[5109]: I0217 00:08:49.998860 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:08:50 crc kubenswrapper[5109]: W0217 00:08:49.999996 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-1b50f66c937e5de876c98bd55867f8819b081e762dff37b0136c9e4fc78944b3 WatchSource:0}: Error finding container 1b50f66c937e5de876c98bd55867f8819b081e762dff37b0136c9e4fc78944b3: Status 404 returned error can't find the container with id 1b50f66c937e5de876c98bd55867f8819b081e762dff37b0136c9e4fc78944b3 Feb 17 00:08:50 crc kubenswrapper[5109]: W0217 00:08:50.001315 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-17410ef27c1c801ca1aee32cac727673e68eb64e2c91092fa55eec1167bcdad7 WatchSource:0}: Error finding container 17410ef27c1c801ca1aee32cac727673e68eb64e2c91092fa55eec1167bcdad7: Status 404 returned error can't find the container with id 17410ef27c1c801ca1aee32cac727673e68eb64e2c91092fa55eec1167bcdad7 Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.003037 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Feb 17 00:08:50 crc kubenswrapper[5109]: W0217 00:08:50.003470 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-e277464112b32f808bd815ec1e74c82b77e7feb5afa25052e9c502b0c00ac736 WatchSource:0}: Error finding container e277464112b32f808bd815ec1e74c82b77e7feb5afa25052e9c502b0c00ac736: Status 404 returned error can't find the container with id e277464112b32f808bd815ec1e74c82b77e7feb5afa25052e9c502b0c00ac736 Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.238933 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:50 crc 
kubenswrapper[5109]: I0217 00:08:50.240107 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.240139 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.240164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.240189 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.240671 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.366452 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.422112 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.470628 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"e277464112b32f808bd815ec1e74c82b77e7feb5afa25052e9c502b0c00ac736"} Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 
00:08:50.472703 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"17410ef27c1c801ca1aee32cac727673e68eb64e2c91092fa55eec1167bcdad7"} Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.473972 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1b50f66c937e5de876c98bd55867f8819b081e762dff37b0136c9e4fc78944b3"} Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.475680 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a20c1929fdd0351d99b6ce4f1b337d034784bef16f948325aa8967cd5c577d6c"} Feb 17 00:08:50 crc kubenswrapper[5109]: I0217 00:08:50.477574 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a6a84a879b16f279015fa86d4278549af0fdc67867789b6efe992d2c85298446"} Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.494508 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.661700 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.671139 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 17 00:08:50 crc kubenswrapper[5109]: E0217 00:08:50.804258 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.040772 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.041499 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.041540 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.041551 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.041574 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.042172 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.289353 5109 
certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.290674 5109 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.365962 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.484050 5109 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="a94f5092ece221bd275685b40e310ce0ef6a4928f5127934f7287e842dd65a6d" exitCode=0 Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.484141 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"a94f5092ece221bd275685b40e310ce0ef6a4928f5127934f7287e842dd65a6d"} Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.484246 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.485162 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.485226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.485247 
5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.485532 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.487072 5109 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="55504d04baf9b16a365257fe21ede930d563e3b29efbd5d90657b03324866a57" exitCode=0 Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.487115 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"55504d04baf9b16a365257fe21ede930d563e3b29efbd5d90657b03324866a57"} Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.487363 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.488498 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.488563 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.488584 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.488983 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.489992 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335" exitCode=0 Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.490120 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335"} Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.490280 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.491561 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.491608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.491624 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.491864 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.494222 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9276ef83ae446d7739ce65ad4e09a455b6208b9c7a53629a957811a2911843f5"} Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.494260 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6724cb7115224b3beafe0f51aabd64e0d52a6101b64fe1dd025f1b91232bc384"} Feb 17 00:08:51 crc kubenswrapper[5109]: 
I0217 00:08:51.496908 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.496940 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="a48ddaae74ae155d492a43794ecc243ec32762eb7a934225e66d824ef33860b7" exitCode=0 Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.497008 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"a48ddaae74ae155d492a43794ecc243ec32762eb7a934225e66d824ef33860b7"} Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.497101 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.497617 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.497652 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.497664 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:51 crc kubenswrapper[5109]: E0217 00:08:51.497831 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.498061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.498125 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:51 crc kubenswrapper[5109]: I0217 00:08:51.498152 5109 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.366609 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.199:6443: connect: connection refused Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.405663 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.501245 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="bb9fc62050994fe5e33821e86436a027f0bdc0001cfb5d2911c514e6ebea56f5" exitCode=0 Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.501319 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"bb9fc62050994fe5e33821e86436a027f0bdc0001cfb5d2911c514e6ebea56f5"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.501471 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.502619 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.502676 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.502694 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.502936 
5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.504219 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"3305e1572aeb5114f3fc14234cbfd9910730408fc735ebc9138bae8714cb54ff"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.504745 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.505220 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.505271 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.505289 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.505540 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.507175 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"469a7ec86233162cfa2e546021f0055071466a80a75fa48b2a3473e405edc680"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.507206 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2b47604e1052f54fb2275dce07a558601c0b4bfad7005f60c44e8dce92e3005c"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.507219 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c46f62e403fea3cdc8be34db5c36b31a122fe6d78a6308dc28830431c5dc7b06"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.507626 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.508241 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.508277 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.508290 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.508485 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.510361 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.510393 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db"} Feb 17 
00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.510410 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.510426 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.512886 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a2699fb5c0bc449f918758c90f560033cfa82671cbb60f1d79c7a1dda888f194"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.512916 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d6dd4d88205d07bc393ac21615ad8ae1693766d7476589071215e1093a4d832e"} Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.513029 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.513479 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.513516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.513532 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc 
kubenswrapper[5109]: E0217 00:08:52.513790 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.643850 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.648032 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.648080 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.648091 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.648117 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.648635 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.199:6443: connect: connection refused" node="crc" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.735108 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 17 00:08:52 crc kubenswrapper[5109]: I0217 00:08:52.823887 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:52 crc kubenswrapper[5109]: E0217 00:08:52.999010 5109 reflector.go:200] "Failed to 
watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.199:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.524213 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"9ea2321308f270e100d48c0fc714e100b5f72a5cf4e9194672cea9dd3b1e99bc"} Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.524480 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.525562 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.525647 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.525671 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:53 crc kubenswrapper[5109]: E0217 00:08:53.526064 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.528018 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="cdb84989ffbaeb987f1b666d66905dcd1e8c997c1ffaf47e890b744793ff4fc5" exitCode=0 Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.528297 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.528830 5109 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.529129 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"cdb84989ffbaeb987f1b666d66905dcd1e8c997c1ffaf47e890b744793ff4fc5"} Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.529318 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.529787 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.529880 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.530649 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.530683 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.530697 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:53 crc kubenswrapper[5109]: E0217 00:08:53.531044 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.531496 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.531531 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.531547 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:53 crc kubenswrapper[5109]: E0217 00:08:53.531891 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532283 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532318 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532336 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:53 crc kubenswrapper[5109]: E0217 00:08:53.532579 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532922 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532957 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:53 crc kubenswrapper[5109]: I0217 00:08:53.532972 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:53 crc kubenswrapper[5109]: E0217 00:08:53.533170 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.180077 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:08:54 crc kubenswrapper[5109]: 
I0217 00:08:54.536360 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a1ed21950236bfcef3e4e43b1b1b51807c2bf962afafe52fea0c329bb95ec8a1"} Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536410 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9240a3bec4000d7ca27bea3887268965add2d848b7ce30fe954758922063461e"} Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536423 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9ac2793428d71aba7d6d42ce84de49139be1ce4d8ef3f17f38d753a042d9b7e6"} Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536433 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"adca90b4cbebf962d60acc3a5facc178862fbd9dc66075a6ddb72c746ecf6c72"} Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536726 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536771 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536876 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.536877 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537263 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 
00:08:54.537289 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537303 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537505 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537553 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537573 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:54 crc kubenswrapper[5109]: E0217 00:08:54.537681 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537826 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537873 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:54 crc kubenswrapper[5109]: I0217 00:08:54.537897 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:54 crc kubenswrapper[5109]: E0217 00:08:54.538125 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:54 crc kubenswrapper[5109]: E0217 00:08:54.538467 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 
00:08:55.533110 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.545517 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d4b1dec918be4294e36e17ec1d41fe25afc4d62900b5d238a1765dff29cb3479"} Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.545691 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.545769 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546688 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546691 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546725 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546774 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.546972 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:55 crc kubenswrapper[5109]: E0217 00:08:55.547319 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"crc\" not found" node="crc" Feb 17 00:08:55 crc kubenswrapper[5109]: E0217 00:08:55.547488 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.601287 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.849243 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.850497 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.850645 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.850677 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:55 crc kubenswrapper[5109]: I0217 00:08:55.850725 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.548498 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.549661 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.549734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.549762 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:56 crc kubenswrapper[5109]: E0217 00:08:56.550716 5109 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.580164 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.580452 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.580511 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.581760 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.581807 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:08:56 crc kubenswrapper[5109]: I0217 00:08:56.581821 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:08:56 crc kubenswrapper[5109]: E0217 00:08:56.582172 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.290474 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.550913 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.552115 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.552181 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.552201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:57 crc kubenswrapper[5109]: E0217 00:08:57.552952 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:57 crc kubenswrapper[5109]: I0217 00:08:57.886282 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.062821 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.063150 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.065628 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.065705 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.065734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:58 crc kubenswrapper[5109]: E0217 00:08:58.066540 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.076121 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.216304 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.216554 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.216607 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.217527 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.217639 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.217663 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:58 crc kubenswrapper[5109]: E0217 00:08:58.218403 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.553807 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.553896 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.554672 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.554699 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.554711 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 
00:08:58.554739 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.554778 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.554800 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:58 crc kubenswrapper[5109]: E0217 00:08:58.555014 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: E0217 00:08:58.555649 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.875870 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.876123 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.877037 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.877107 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:08:58 crc kubenswrapper[5109]: I0217 00:08:58.877121 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:08:58 crc kubenswrapper[5109]: E0217 00:08:58.877570 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:08:59 crc 
kubenswrapper[5109]: E0217 00:08:59.535274 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.557453 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.557810 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.559277 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.559336 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.559347 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:00 crc kubenswrapper[5109]: E0217 00:09:00.559742 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:00 crc kubenswrapper[5109]: I0217 00:09:00.565324 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:09:01 crc kubenswrapper[5109]: I0217 00:09:01.562858 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:01 crc kubenswrapper[5109]: I0217 00:09:01.563983 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:01 crc kubenswrapper[5109]: I0217 00:09:01.564043 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:01 
crc kubenswrapper[5109]: I0217 00:09:01.564061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:01 crc kubenswrapper[5109]: E0217 00:09:01.564532 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:03 crc kubenswrapper[5109]: I0217 00:09:03.226230 5109 trace.go:236] Trace[1973012151]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:08:53.224) (total time: 10001ms):
Feb 17 00:09:03 crc kubenswrapper[5109]: Trace[1973012151]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:03.226)
Feb 17 00:09:03 crc kubenswrapper[5109]: Trace[1973012151]: [10.001671027s] [10.001671027s] END
Feb 17 00:09:03 crc kubenswrapper[5109]: E0217 00:09:03.226272 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 17 00:09:03 crc kubenswrapper[5109]: I0217 00:09:03.367941 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 17 00:09:03 crc kubenswrapper[5109]: I0217 00:09:03.392237 5109 trace.go:236] Trace[1168361788]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:08:53.390) (total time: 10001ms):
Feb 17 00:09:03 crc kubenswrapper[5109]: Trace[1168361788]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:03.392)
Feb 17 00:09:03 crc kubenswrapper[5109]: Trace[1168361788]: [10.001980875s] [10.001980875s] END
Feb 17 00:09:03 crc kubenswrapper[5109]: E0217 00:09:03.392273 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 17 00:09:03 crc kubenswrapper[5109]: I0217 00:09:03.558487 5109 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 17 00:09:03 crc kubenswrapper[5109]: I0217 00:09:03.558562 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 17 00:09:04 crc kubenswrapper[5109]: E0217 00:09:04.131906 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.1894e013dd73b89d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,LastTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.193602 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.193679 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.201035 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.201092 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.572343 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.573847 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="9ea2321308f270e100d48c0fc714e100b5f72a5cf4e9194672cea9dd3b1e99bc" exitCode=255
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.573923 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"9ea2321308f270e100d48c0fc714e100b5f72a5cf4e9194672cea9dd3b1e99bc"}
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.574216 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.575013 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.575079 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.575098 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:04 crc kubenswrapper[5109]: E0217 00:09:04.575688 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:04 crc kubenswrapper[5109]: I0217 00:09:04.576085 5109 scope.go:117] "RemoveContainer" containerID="9ea2321308f270e100d48c0fc714e100b5f72a5cf4e9194672cea9dd3b1e99bc"
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.579844 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.583431 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64"}
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.583763 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.584702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.584763 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:05 crc kubenswrapper[5109]: I0217 00:09:05.584782 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:05 crc kubenswrapper[5109]: E0217 00:09:05.585322 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:05 crc kubenswrapper[5109]: E0217 00:09:05.607206 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.333053 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.333281 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:07 crc 
kubenswrapper[5109]: I0217 00:09:07.339024 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.339307 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.339393 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:07 crc kubenswrapper[5109]: E0217 00:09:07.339964 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.352896 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 17 00:09:07 crc kubenswrapper[5109]: E0217 00:09:07.442023 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.589039 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.589713 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.589755 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:07 crc kubenswrapper[5109]: I0217 00:09:07.589769 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:07 crc kubenswrapper[5109]: E0217 00:09:07.590247 5109 kubelet.go:3336] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.225539 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.225791 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.225866 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.226916 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.226989 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.227012 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:08 crc kubenswrapper[5109]: E0217 00:09:08.227738 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.231547 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.591949 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.592807 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.592853 5109 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:08 crc kubenswrapper[5109]: I0217 00:09:08.592865 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:08 crc kubenswrapper[5109]: E0217 00:09:08.593345 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.190837 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.190861 5109 trace.go:236] Trace[1920448046]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:08:57.439) (total time: 11751ms):
Feb 17 00:09:09 crc kubenswrapper[5109]: Trace[1920448046]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 11751ms (00:09:09.190)
Feb 17 00:09:09 crc kubenswrapper[5109]: Trace[1920448046]: [11.751216687s] [11.751216687s] END
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.191108 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.190973 5109 trace.go:236] Trace[1364085293]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 00:08:56.802) (total time: 12388ms):
Feb 17 00:09:09 crc kubenswrapper[5109]: Trace[1364085293]: ---"Objects listed" 
error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12388ms (00:09:09.190)
Feb 17 00:09:09 crc kubenswrapper[5109]: Trace[1364085293]: [12.388047734s] [12.388047734s] END
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.191139 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.193418 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.375293 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.535531 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.593571 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.595019 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.595053 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:09 crc kubenswrapper[5109]: I0217 00:09:09.595063 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.595392 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:09 crc kubenswrapper[5109]: E0217 00:09:09.680706 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.367081 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.565771 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.566157 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.567071 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.567112 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.567123 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:10 crc kubenswrapper[5109]: E0217 00:09:10.567409 5109 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.572494 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.595800 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.597058 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.597108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:10 crc kubenswrapper[5109]: I0217 00:09:10.597137 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:10 crc kubenswrapper[5109]: E0217 00:09:10.597544 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.375061 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.599115 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.599709 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 17 00:09:11 crc 
kubenswrapper[5109]: I0217 00:09:11.601555 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64" exitCode=255
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.601670 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64"}
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.601744 5109 scope.go:117] "RemoveContainer" containerID="9ea2321308f270e100d48c0fc714e100b5f72a5cf4e9194672cea9dd3b1e99bc"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.601912 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.602385 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.602427 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.602441 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:11 crc kubenswrapper[5109]: E0217 00:09:11.602844 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:11 crc kubenswrapper[5109]: I0217 00:09:11.603162 5109 scope.go:117] "RemoveContainer" containerID="aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64"
Feb 17 00:09:11 crc kubenswrapper[5109]: E0217 00:09:11.603409 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 17 00:09:12 crc kubenswrapper[5109]: E0217 00:09:12.012260 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 00:09:12 crc kubenswrapper[5109]: I0217 00:09:12.370350 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:12 crc kubenswrapper[5109]: I0217 00:09:12.605727 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 17 00:09:13 crc kubenswrapper[5109]: I0217 00:09:13.373660 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.140558 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013dd73b89d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,LastTimestamp:2026-02-17 00:08:49.387796637 +0000 UTC m=+0.719351395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.148576 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.156125 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC 
m=+0.786052312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.164432 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.172796 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e619015d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.532846429 +0000 UTC m=+0.864401187,LastTimestamp:2026-02-17 00:08:49.532846429 +0000 UTC m=+0.864401187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 
00:09:14.180854 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.565949708 +0000 UTC m=+0.897504466,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.188651 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.565981198 +0000 UTC m=+0.897535956,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.196025 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.565990098 +0000 UTC m=+0.897544856,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.203684 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.567939402 +0000 UTC m=+0.899494200,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.211427 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.567971032 +0000 UTC m=+0.899525830,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.219572 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.567981562 +0000 UTC m=+0.899536350,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.224790 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.567998572 +0000 UTC m=+0.899553370,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.227642 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.568016982 +0000 UTC m=+0.899571780,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.232427 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is 
now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.568051182 +0000 UTC m=+0.899605940,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.235404 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.569403468 +0000 UTC m=+0.900958216,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.240097 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC 
m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.569417618 +0000 UTC m=+0.900972376,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.244352 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.569429728 +0000 UTC m=+0.900984486,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.251759 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.569833467 +0000 UTC m=+0.901388265,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.263417 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.569863417 +0000 UTC m=+0.901418215,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.271276 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.569881407 +0000 UTC m=+0.901436205,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.279465 5109 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.571252673 +0000 UTC m=+0.902807461,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.287544 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16ccc52\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16ccc52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454451794 +0000 UTC m=+0.786006562,LastTimestamp:2026-02-17 00:08:49.571276603 +0000 UTC m=+0.902831361,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.295204 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.571292853 +0000 UTC m=+0.902847611,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.303304 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16dd372\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16dd372 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454519154 +0000 UTC m=+0.786073912,LastTimestamp:2026-02-17 00:08:49.571302773 +0000 UTC m=+0.902857531,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.311239 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894e013e16d7f12\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894e013e16d7f12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.454497554 +0000 UTC m=+0.786052312,LastTimestamp:2026-02-17 00:08:49.571349143 +0000 UTC m=+0.902903901,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.320956 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e01401e70c76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:49.999334518 +0000 UTC m=+1.330889306,LastTimestamp:2026-02-17 00:08:49.999334518 +0000 UTC m=+1.330889306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.328890 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01402011dce 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.001042894 +0000 UTC m=+1.332597672,LastTimestamp:2026-02-17 00:08:50.001042894 +0000 UTC m=+1.332597672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.336674 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0140243b4c8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.00540692 +0000 UTC m=+1.336961738,LastTimestamp:2026-02-17 00:08:50.00540692 +0000 UTC m=+1.336961738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.344494 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0140250d81d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.006267933 +0000 UTC m=+1.337822731,LastTimestamp:2026-02-17 00:08:50.006267933 +0000 UTC m=+1.337822731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.352486 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e014025aa0b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.006909113 +0000 UTC m=+1.338463901,LastTimestamp:2026-02-17 00:08:50.006909113 +0000 UTC m=+1.338463901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.361333 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0142ccc8120 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.7190152 +0000 UTC m=+2.050569958,LastTimestamp:2026-02-17 00:08:50.7190152 +0000 UTC m=+2.050569958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.373444 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0142cd88292 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.719802002 +0000 UTC m=+2.051356770,LastTimestamp:2026-02-17 00:08:50.719802002 +0000 UTC m=+2.051356770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: I0217 00:09:14.373648 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.379086 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0142cda9c46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.719939654 +0000 UTC m=+2.051494422,LastTimestamp:2026-02-17 00:08:50.719939654 +0000 UTC m=+2.051494422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.387179 5109 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0142ce34115 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.720506133 +0000 UTC m=+2.052060891,LastTimestamp:2026-02-17 00:08:50.720506133 +0000 UTC m=+2.052060891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.397496 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0142cec08f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.721081592 +0000 UTC m=+2.052636350,LastTimestamp:2026-02-17 00:08:50.721081592 +0000 UTC m=+2.052636350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc 
kubenswrapper[5109]: E0217 00:09:14.405860 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0142dcc97a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.735798185 +0000 UTC m=+2.067352933,LastTimestamp:2026-02-17 00:08:50.735798185 +0000 UTC m=+2.067352933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.411997 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0142dcd0617 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.735826455 +0000 UTC m=+2.067381213,LastTimestamp:2026-02-17 00:08:50.735826455 +0000 UTC m=+2.067381213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.419634 5109 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0142dd85d60 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.736569696 +0000 UTC m=+2.068124454,LastTimestamp:2026-02-17 00:08:50.736569696 +0000 UTC m=+2.068124454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.427307 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0142dd95707 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.736633607 +0000 UTC m=+2.068188375,LastTimestamp:2026-02-17 00:08:50.736633607 +0000 UTC m=+2.068188375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.433712 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0142dd98700 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.736645888 +0000 UTC m=+2.068200646,LastTimestamp:2026-02-17 00:08:50.736645888 +0000 UTC m=+2.068200646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.441008 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0142dee3aec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:50.738002668 +0000 UTC m=+2.069557426,LastTimestamp:2026-02-17 00:08:50.738002668 +0000 UTC m=+2.069557426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.448364 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0143fa199e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.034970597 +0000 UTC m=+2.366525365,LastTimestamp:2026-02-17 00:08:51.034970597 +0000 UTC m=+2.366525365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.456974 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01440661d6a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.047849322 +0000 UTC m=+2.379404120,LastTimestamp:2026-02-17 00:08:51.047849322 +0000 UTC m=+2.379404120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.464464 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0144077dd14 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.0490125 +0000 UTC m=+2.380567258,LastTimestamp:2026-02-17 00:08:51.0490125 +0000 UTC m=+2.380567258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.471857 5109 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0145a93abc9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.487042505 +0000 UTC m=+2.818597313,LastTimestamp:2026-02-17 00:08:51.487042505 +0000 UTC m=+2.818597313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.479640 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0145ac422b2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.490218674 
+0000 UTC m=+2.821773442,LastTimestamp:2026-02-17 00:08:51.490218674 +0000 UTC m=+2.821773442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.486067 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0145b232d06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.496447238 +0000 UTC m=+2.828002066,LastTimestamp:2026-02-17 00:08:51.496447238 +0000 UTC m=+2.828002066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.491811 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0145b610cc1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.500502209 +0000 UTC m=+2.832056967,LastTimestamp:2026-02-17 00:08:51.500502209 +0000 UTC m=+2.832056967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.498625 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01463f67387 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.644511111 +0000 UTC m=+2.976065869,LastTimestamp:2026-02-17 00:08:51.644511111 +0000 UTC m=+2.976065869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.506355 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01465f8dfb8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.678224312 +0000 UTC m=+3.009779070,LastTimestamp:2026-02-17 00:08:51.678224312 +0000 UTC m=+3.009779070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.513937 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01466158c6a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.68010353 +0000 UTC m=+3.011658288,LastTimestamp:2026-02-17 
00:08:51.68010353 +0000 UTC m=+3.011658288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.525567 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0146abf1618 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.758323224 +0000 UTC m=+3.089877982,LastTimestamp:2026-02-17 00:08:51.758323224 +0000 UTC m=+3.089877982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.533505 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0146abf160e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.758323214 +0000 UTC m=+3.089877982,LastTimestamp:2026-02-17 00:08:51.758323214 +0000 UTC m=+3.089877982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.538411 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0146b5982a1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.768443553 +0000 UTC m=+3.099998321,LastTimestamp:2026-02-17 00:08:51.768443553 +0000 UTC m=+3.099998321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.540283 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0146b603728 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.768882984 +0000 UTC m=+3.100437732,LastTimestamp:2026-02-17 00:08:51.768882984 +0000 UTC m=+3.100437732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.546705 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0146b87c8b4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.771476148 +0000 UTC m=+3.103030906,LastTimestamp:2026-02-17 00:08:51.771476148 +0000 UTC m=+3.103030906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.554076 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0146b9beaee openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.77279563 +0000 UTC m=+3.104350388,LastTimestamp:2026-02-17 00:08:51.77279563 +0000 UTC m=+3.104350388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.559940 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894e0146ba8e623 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.773646371 +0000 UTC m=+3.105201129,LastTimestamp:2026-02-17 00:08:51.773646371 +0000 UTC m=+3.105201129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.567369 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0146d36bd5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.799719262 +0000 UTC m=+3.131274020,LastTimestamp:2026-02-17 00:08:51.799719262 +0000 UTC m=+3.131274020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.574577 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0146d51a153 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.801481555 +0000 UTC m=+3.133036313,LastTimestamp:2026-02-17 00:08:51.801481555 +0000 UTC m=+3.133036313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.581430 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0146dfbe244 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.8126393 +0000 UTC m=+3.144194058,LastTimestamp:2026-02-17 00:08:51.8126393 +0000 UTC m=+3.144194058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.586913 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e014729f21e6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.890446822 +0000 UTC m=+3.222001580,LastTimestamp:2026-02-17 
00:08:51.890446822 +0000 UTC m=+3.222001580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.592070 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e01473cf1b98 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:51.910368152 +0000 UTC m=+3.241922910,LastTimestamp:2026-02-17 00:08:51.910368152 +0000 UTC m=+3.241922910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.599400 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0147a3a10cf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created 
container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.018041039 +0000 UTC m=+3.349595797,LastTimestamp:2026-02-17 00:08:52.018041039 +0000 UTC m=+3.349595797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.606643 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0147ab6b28e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.02620891 +0000 UTC m=+3.357763668,LastTimestamp:2026-02-17 00:08:52.02620891 +0000 UTC m=+3.357763668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.613657 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0147b420483 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.035339395 +0000 UTC m=+3.366894173,LastTimestamp:2026-02-17 00:08:52.035339395 +0000 UTC m=+3.366894173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.621213 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e0147b52e264 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.036444772 +0000 UTC m=+3.367999530,LastTimestamp:2026-02-17 00:08:52.036444772 +0000 UTC m=+3.367999530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.629049 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0147b7626a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.038756009 +0000 UTC m=+3.370310767,LastTimestamp:2026-02-17 00:08:52.038756009 +0000 UTC m=+3.370310767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.635724 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0147b82ca5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.039584349 +0000 UTC m=+3.371139107,LastTimestamp:2026-02-17 00:08:52.039584349 +0000 UTC m=+3.371139107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.642261 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01487a51d90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.243160464 +0000 UTC m=+3.574715222,LastTimestamp:2026-02-17 00:08:52.243160464 +0000 UTC m=+3.574715222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.649029 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e01487c1876a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.24502257 
+0000 UTC m=+3.576577318,LastTimestamp:2026-02-17 00:08:52.24502257 +0000 UTC m=+3.576577318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.656069 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01488bcfe47 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.261502535 +0000 UTC m=+3.593057293,LastTimestamp:2026-02-17 00:08:52.261502535 +0000 UTC m=+3.593057293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.663317 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894e01488cbe7df openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.262479839 +0000 UTC m=+3.594034607,LastTimestamp:2026-02-17 00:08:52.262479839 +0000 UTC m=+3.594034607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.672406 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01488ce7571 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.262647153 +0000 UTC m=+3.594201911,LastTimestamp:2026-02-17 00:08:52.262647153 +0000 UTC m=+3.594201911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.677124 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01495e7b932 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.482406706 +0000 UTC m=+3.813961474,LastTimestamp:2026-02-17 00:08:52.482406706 +0000 UTC m=+3.813961474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.683493 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01496f672ce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.500148942 +0000 UTC m=+3.831703720,LastTimestamp:2026-02-17 00:08:52.500148942 +0000 UTC m=+3.831703720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 
00:09:14.713910 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01497040ba2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.501040034 +0000 UTC m=+3.832594802,LastTimestamp:2026-02-17 00:08:52.501040034 +0000 UTC m=+3.832594802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.722129 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e01497401559 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.504974681 
+0000 UTC m=+3.836529449,LastTimestamp:2026-02-17 00:08:52.504974681 +0000 UTC m=+3.836529449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.731909 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a6d4a1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.766368207 +0000 UTC m=+4.097922955,LastTimestamp:2026-02-17 00:08:52.766368207 +0000 UTC m=+4.097922955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.748097 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014a6e2a605 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: 
etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.767286789 +0000 UTC m=+4.098841547,LastTimestamp:2026-02-17 00:08:52.767286789 +0000 UTC m=+4.098841547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.753521 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a78484e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.77789514 +0000 UTC m=+4.109449898,LastTimestamp:2026-02-17 00:08:52.77789514 +0000 UTC m=+4.109449898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.758539 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014a7f1fad5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.785068757 +0000 UTC m=+4.116623515,LastTimestamp:2026-02-17 00:08:52.785068757 +0000 UTC m=+4.116623515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.764142 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014d4a6410a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:53.535080714 +0000 UTC m=+4.866635482,LastTimestamp:2026-02-17 00:08:53.535080714 +0000 UTC m=+4.866635482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.769663 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1894e014e1d6fc21 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:53.756378145 +0000 UTC m=+5.087932913,LastTimestamp:2026-02-17 00:08:53.756378145 +0000 UTC m=+5.087932913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.775649 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014e28e9567 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:53.768410471 +0000 UTC m=+5.099965229,LastTimestamp:2026-02-17 00:08:53.768410471 +0000 UTC m=+5.099965229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.780490 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014e29e5632 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:53.769442866 +0000 UTC m=+5.100997624,LastTimestamp:2026-02-17 00:08:53.769442866 +0000 UTC m=+5.100997624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.785272 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014f3669b02 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.051003138 +0000 UTC m=+5.382557896,LastTimestamp:2026-02-17 00:08:54.051003138 +0000 UTC m=+5.382557896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.791176 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014f43dd7f9 openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.065108985 +0000 UTC m=+5.396663783,LastTimestamp:2026-02-17 00:08:54.065108985 +0000 UTC m=+5.396663783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.796850 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e014f453f869 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.066559081 +0000 UTC m=+5.398113859,LastTimestamp:2026-02-17 00:08:54.066559081 +0000 UTC m=+5.398113859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.802055 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1894e0150200c505 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.295987461 +0000 UTC m=+5.627542229,LastTimestamp:2026-02-17 00:08:54.295987461 +0000 UTC m=+5.627542229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.811147 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e015031d5ae2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.31463805 +0000 UTC m=+5.646192848,LastTimestamp:2026-02-17 00:08:54.31463805 +0000 UTC m=+5.646192848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.817438 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e01503327a84 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.316022404 +0000 UTC m=+5.647577172,LastTimestamp:2026-02-17 00:08:54.316022404 +0000 UTC m=+5.647577172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.822501 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0150f247aab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.516431531 +0000 UTC m=+5.847986329,LastTimestamp:2026-02-17 00:08:54.516431531 +0000 UTC m=+5.847986329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.830462 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1894e015101bfd7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.53265241 +0000 UTC m=+5.864207178,LastTimestamp:2026-02-17 00:08:54.53265241 +0000 UTC m=+5.864207178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.835308 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e01510392b3a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.534564666 +0000 UTC m=+5.866119444,LastTimestamp:2026-02-17 00:08:54.534564666 +0000 UTC m=+5.866119444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.840998 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0151c6ff26a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.739481194 +0000 UTC m=+6.071035942,LastTimestamp:2026-02-17 00:08:54.739481194 +0000 UTC m=+6.071035942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.846740 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894e0151d0044df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:54.748939487 +0000 UTC m=+6.080494245,LastTimestamp:2026-02-17 00:08:54.748939487 +0000 UTC m=+6.080494245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.853520 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 17 00:09:14 crc 
kubenswrapper[5109]: &Event{ObjectMeta:{kube-controller-manager-crc.1894e0172a18002f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 17 00:09:14 crc kubenswrapper[5109]: body: Feb 17 00:09:14 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:03.558533167 +0000 UTC m=+14.890087925,LastTimestamp:2026-02-17 00:09:03.558533167 +0000 UTC m=+14.890087925,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 17 00:09:14 crc kubenswrapper[5109]: > Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.855759 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894e0172a198178 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:03.5586318 +0000 UTC m=+14.890186578,LastTimestamp:2026-02-17 00:09:03.5586318 +0000 UTC m=+14.890186578,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.859026 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 17 00:09:14 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.1894e0174ff33efd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 17 00:09:14 crc kubenswrapper[5109]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 00:09:14 crc kubenswrapper[5109]: Feb 17 00:09:14 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:04.193658621 +0000 UTC m=+15.525213389,LastTimestamp:2026-02-17 00:09:04.193658621 +0000 UTC m=+15.525213389,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 17 00:09:14 crc kubenswrapper[5109]: > Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.864118 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" 
in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0174ff3dcf6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:04.193699062 +0000 UTC m=+15.525253830,LastTimestamp:2026-02-17 00:09:04.193699062 +0000 UTC m=+15.525253830,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.868141 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e0174ff33efd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 17 00:09:14 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.1894e0174ff33efd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 17 00:09:14 crc kubenswrapper[5109]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 17 00:09:14 crc kubenswrapper[5109]: Feb 17 00:09:14 crc kubenswrapper[5109]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:04.193658621 +0000 UTC m=+15.525213389,LastTimestamp:2026-02-17 00:09:04.201071053 +0000 UTC m=+15.532625821,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 17 00:09:14 crc kubenswrapper[5109]: > Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.872636 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e0174ff3dcf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e0174ff3dcf6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:04.193699062 +0000 UTC m=+15.525253830,LastTimestamp:2026-02-17 00:09:04.201111024 +0000 UTC m=+15.532665792,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.876793 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e01497040ba2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01497040ba2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.501040034 +0000 UTC m=+3.832594802,LastTimestamp:2026-02-17 00:09:04.577506098 +0000 UTC m=+15.909060856,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.884706 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e014a6d4a1cf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a6d4a1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.766368207 +0000 UTC m=+4.097922955,LastTimestamp:2026-02-17 00:09:04.828953639 +0000 UTC m=+16.160508407,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.889359 5109 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.1894e014a78484e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a78484e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.77789514 +0000 UTC m=+4.109449898,LastTimestamp:2026-02-17 00:09:04.840451882 +0000 UTC m=+16.172006650,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:14 crc kubenswrapper[5109]: E0217 00:09:14.895848 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e019099a6a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,LastTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.370520 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.593939 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.595285 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.595353 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.595378 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:15 crc kubenswrapper[5109]: I0217 00:09:15.595419 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:09:15 crc kubenswrapper[5109]: E0217 00:09:15.607649 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 17 00:09:16 crc kubenswrapper[5109]: I0217 00:09:16.374748 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:17 crc kubenswrapper[5109]: I0217 00:09:17.371760 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:18 crc kubenswrapper[5109]: E0217 00:09:18.328451 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 17 00:09:18 crc kubenswrapper[5109]: I0217 00:09:18.372230 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:19 crc kubenswrapper[5109]: E0217 00:09:19.021278 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 17 00:09:19 crc kubenswrapper[5109]: I0217 00:09:19.373697 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:19 crc kubenswrapper[5109]: E0217 00:09:19.514812 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 17 00:09:19 crc kubenswrapper[5109]: E0217 00:09:19.535911 5109 eviction_manager.go:292] "Eviction manager: 
failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:09:19 crc kubenswrapper[5109]: E0217 00:09:19.611791 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.372675 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.520227 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.520703 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.521996 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.522039 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.522050 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:20 crc kubenswrapper[5109]: E0217 00:09:20.522375 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:09:20 crc kubenswrapper[5109]: I0217 00:09:20.522668 5109 scope.go:117] "RemoveContainer" containerID="aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64" Feb 
17 00:09:20 crc kubenswrapper[5109]: E0217 00:09:20.522883 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 17 00:09:20 crc kubenswrapper[5109]: E0217 00:09:20.527528 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e019099a6a72\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e019099a6a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,LastTimestamp:2026-02-17 00:09:20.522831734 +0000 UTC m=+31.854386492,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:21 crc kubenswrapper[5109]: E0217 00:09:21.226981 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Feb 17 00:09:21 crc kubenswrapper[5109]: I0217 00:09:21.373822 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.373187 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.607813 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.608909 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.608951 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.608963 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:22 crc kubenswrapper[5109]: I0217 00:09:22.608988 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:09:22 crc kubenswrapper[5109]: E0217 00:09:22.621457 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 17 00:09:23 crc kubenswrapper[5109]: I0217 00:09:23.373097 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:24 crc kubenswrapper[5109]: I0217 00:09:24.371198 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:25 crc kubenswrapper[5109]: I0217 00:09:25.371500 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:26 crc kubenswrapper[5109]: E0217 00:09:26.029718 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 17 00:09:26 crc kubenswrapper[5109]: I0217 00:09:26.370370 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:27 crc kubenswrapper[5109]: I0217 00:09:27.371914 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:28 crc kubenswrapper[5109]: I0217 00:09:28.371096 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.374350 5109 csi_plugin.go:988] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:29 crc kubenswrapper[5109]: E0217 00:09:29.536690 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.621687 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.622834 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.622893 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.622909 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:29 crc kubenswrapper[5109]: I0217 00:09:29.622936 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 17 00:09:29 crc kubenswrapper[5109]: E0217 00:09:29.635064 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 17 00:09:30 crc kubenswrapper[5109]: I0217 00:09:30.371199 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:31 crc kubenswrapper[5109]: I0217 00:09:31.371218 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:32 crc kubenswrapper[5109]: I0217 00:09:32.371527 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.038374 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.133203 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 17 00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.373344 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 17 00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.463753 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.464637 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.464712 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 
00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.464739 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.465352 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:09:33 crc kubenswrapper[5109]: I0217 00:09:33.465812 5109 scope.go:117] "RemoveContainer" containerID="aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64" Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.478469 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e01497040ba2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01497040ba2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.501040034 +0000 UTC m=+3.832594802,LastTimestamp:2026-02-17 00:09:33.467732347 +0000 UTC m=+44.799287145,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.798367 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e014a6d4a1cf\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a6d4a1cf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.766368207 +0000 UTC m=+4.097922955,LastTimestamp:2026-02-17 00:09:33.790237111 +0000 UTC m=+45.121791909,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:33 crc kubenswrapper[5109]: E0217 00:09:33.837862 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e014a78484e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e014a78484e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.77789514 +0000 UTC m=+4.109449898,LastTimestamp:2026-02-17 00:09:33.829806825 +0000 UTC m=+45.161361623,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.370259 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.662006 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.664453 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"}
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.664829 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.670272 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.670542 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:34 crc kubenswrapper[5109]: I0217 00:09:34.670558 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:34 crc kubenswrapper[5109]: E0217 00:09:34.671647 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.373447 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.669092 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.670002 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.672053 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c" exitCode=255
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.672147 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"}
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.672205 5109 scope.go:117] "RemoveContainer" containerID="aeec71cf3d683cf9805abac3a495dcc38b9b4cc9c777d6a380929127eaee8b64"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.672450 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.673189 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.673233 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.673247 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:35 crc kubenswrapper[5109]: E0217 00:09:35.673678 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:35 crc kubenswrapper[5109]: I0217 00:09:35.673947 5109 scope.go:117] "RemoveContainer" containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"
Feb 17 00:09:35 crc kubenswrapper[5109]: E0217 00:09:35.674154 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 17 00:09:35 crc kubenswrapper[5109]: E0217 00:09:35.679139 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e019099a6a72\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e019099a6a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,LastTimestamp:2026-02-17 00:09:35.674117598 +0000 UTC m=+47.005672366,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.372086 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.635835 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.637062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.637130 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.637151 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.637191 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 17 00:09:36 crc kubenswrapper[5109]: E0217 00:09:36.653675 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 17 00:09:36 crc kubenswrapper[5109]: I0217 00:09:36.677738 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 17 00:09:37 crc kubenswrapper[5109]: I0217 00:09:37.374816 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:38 crc kubenswrapper[5109]: I0217 00:09:38.374117 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:39 crc kubenswrapper[5109]: I0217 00:09:39.371180 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:39 crc kubenswrapper[5109]: E0217 00:09:39.537397 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:09:39 crc kubenswrapper[5109]: E0217 00:09:39.998785 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 17 00:09:40 crc kubenswrapper[5109]: E0217 00:09:40.045782 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.370948 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.520485 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.520798 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.522466 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.522534 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.522559 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:40 crc kubenswrapper[5109]: E0217 00:09:40.523159 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:40 crc kubenswrapper[5109]: I0217 00:09:40.523811 5109 scope.go:117] "RemoveContainer" containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"
Feb 17 00:09:40 crc kubenswrapper[5109]: E0217 00:09:40.524169 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 17 00:09:40 crc kubenswrapper[5109]: E0217 00:09:40.532426 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e019099a6a72\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e019099a6a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,LastTimestamp:2026-02-17 00:09:40.524117982 +0000 UTC m=+51.855672780,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:41 crc kubenswrapper[5109]: I0217 00:09:41.374229 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:41 crc kubenswrapper[5109]: E0217 00:09:41.683811 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 17 00:09:42 crc kubenswrapper[5109]: I0217 00:09:42.373673 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.373398 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.654478 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.655628 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.655665 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.655678 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:43 crc kubenswrapper[5109]: I0217 00:09:43.655700 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 17 00:09:43 crc kubenswrapper[5109]: E0217 00:09:43.666366 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.374291 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:44 crc kubenswrapper[5109]: E0217 00:09:44.484148 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.543621 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.543894 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.544787 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.544846 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.544873 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:44 crc kubenswrapper[5109]: E0217 00:09:44.545346 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.665978 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.666184 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.667116 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.667173 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.667187 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:44 crc kubenswrapper[5109]: E0217 00:09:44.667632 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:44 crc kubenswrapper[5109]: I0217 00:09:44.667894 5109 scope.go:117] "RemoveContainer" containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"
Feb 17 00:09:44 crc kubenswrapper[5109]: E0217 00:09:44.668099 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 17 00:09:44 crc kubenswrapper[5109]: E0217 00:09:44.672905 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e019099a6a72\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e019099a6a72 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:09:11.603366514 +0000 UTC m=+22.934921292,LastTimestamp:2026-02-17 00:09:44.668070301 +0000 UTC m=+55.999625059,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:45 crc kubenswrapper[5109]: I0217 00:09:45.372819 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:46 crc kubenswrapper[5109]: I0217 00:09:46.375152 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:47 crc kubenswrapper[5109]: E0217 00:09:47.052304 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 00:09:47 crc kubenswrapper[5109]: I0217 00:09:47.375455 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:48 crc kubenswrapper[5109]: I0217 00:09:48.371862 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:49 crc kubenswrapper[5109]: I0217 00:09:49.370878 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:49 crc kubenswrapper[5109]: E0217 00:09:49.538215 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.371624 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.666512 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.668072 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.668181 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.668202 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:50 crc kubenswrapper[5109]: I0217 00:09:50.668240 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 17 00:09:50 crc kubenswrapper[5109]: E0217 00:09:50.682794 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 17 00:09:51 crc kubenswrapper[5109]: I0217 00:09:51.373084 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:52 crc kubenswrapper[5109]: I0217 00:09:52.371646 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:53 crc kubenswrapper[5109]: I0217 00:09:53.373807 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:54 crc kubenswrapper[5109]: E0217 00:09:54.060139 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 17 00:09:54 crc kubenswrapper[5109]: I0217 00:09:54.372816 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:55 crc kubenswrapper[5109]: I0217 00:09:55.370935 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.371303 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.463972 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.465028 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.465093 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.465122 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:56 crc kubenswrapper[5109]: E0217 00:09:56.465902 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.466302 5109 scope.go:117] "RemoveContainer" containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c"
Feb 17 00:09:56 crc kubenswrapper[5109]: E0217 00:09:56.479372 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894e01497040ba2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894e01497040ba2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:08:52.501040034 +0000 UTC m=+3.832594802,LastTimestamp:2026-02-17 00:09:56.468226859 +0000 UTC m=+67.799781657,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.735695 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.737560 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef"}
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.737813 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.738511 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.738620 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:56 crc kubenswrapper[5109]: I0217 00:09:56.738638 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:56 crc kubenswrapper[5109]: E0217 00:09:56.739181 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.371532 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.410957 5109 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7hl9t"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.418053 5109 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-7hl9t"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.511156 5109 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.683265 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.684617 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.684682 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.684698 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.684874 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.693775 5109 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.694294 5109 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.694326 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.697825 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.698465 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.698491 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.698514 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.698528 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:09:57Z","lastTransitionTime":"2026-02-17T00:09:57Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.710799 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes
\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\
"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.720100 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.720139 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.720151 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.720207 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.720234 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:09:57Z","lastTransitionTime":"2026-02-17T00:09:57Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.729401 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951
f-8aba64a9d9fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.757321 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.757355 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.757364 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.757379 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.757389 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:09:57Z","lastTransitionTime":"2026-02-17T00:09:57Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.772052 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951
f-8aba64a9d9fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.783183 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.783229 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.783238 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.783255 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:09:57 crc kubenswrapper[5109]: I0217 00:09:57.783267 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:09:57Z","lastTransitionTime":"2026-02-17T00:09:57Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.795730 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:57Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/red
hat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
e245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c076895
57c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733
aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951
f-8aba64a9d9fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.795904 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.795938 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.897145 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:57 crc kubenswrapper[5109]: E0217 00:09:57.997751 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.098941 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.179263 5109 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.199508 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.299867 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.399978 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.420423 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" 
expiration="2026-03-19 00:04:57 +0000 UTC" deadline="2026-03-12 15:07:41.992188155 +0000 UTC" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.420497 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="566h57m43.571694853s" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.501014 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.602142 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.702837 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.757080 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.757920 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.759318 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" exitCode=255 Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.759372 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef"} Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.759412 5109 scope.go:117] "RemoveContainer" 
containerID="80fb77042b2cce721b7a583a6af40e9a96135b9b337336ff6cce7a019418039c" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.759666 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.760405 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.760435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.760449 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.760841 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:09:58 crc kubenswrapper[5109]: I0217 00:09:58.761099 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.761325 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.803772 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:58 crc kubenswrapper[5109]: E0217 00:09:58.904405 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 
00:09:59.005321 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.106107 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.206652 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.307558 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.408399 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.509083 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.538847 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.609868 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.710168 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: I0217 00:09:59.764706 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.810288 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:09:59 crc kubenswrapper[5109]: E0217 00:09:59.910360 5109 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.010826 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.111193 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.212095 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.312789 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.413007 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.513409 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.520764 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.521127 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.522189 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.522226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.522239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 
00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.522661 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:10:00 crc kubenswrapper[5109]: I0217 00:10:00.522867 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.523051 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.614145 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.714329 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.815188 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:00 crc kubenswrapper[5109]: E0217 00:10:00.915916 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.016737 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.117295 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.218286 5109 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.318892 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.418989 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.519355 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.619417 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: I0217 00:10:01.640220 5109 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.720110 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.820892 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:01 crc kubenswrapper[5109]: E0217 00:10:01.921751 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.022667 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.123643 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.223756 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.324491 5109 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.424581 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.525703 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.626447 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.727573 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.828397 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:02 crc kubenswrapper[5109]: E0217 00:10:02.929469 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.030724 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.131846 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.232579 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.332936 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.433474 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc 
kubenswrapper[5109]: E0217 00:10:03.534010 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.635049 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.735927 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.836293 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:03 crc kubenswrapper[5109]: E0217 00:10:03.937392 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.038384 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.138956 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.239818 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.340890 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.441627 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: I0217 00:10:04.464069 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:10:04 crc kubenswrapper[5109]: I0217 00:10:04.464728 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 17 00:10:04 crc kubenswrapper[5109]: I0217 00:10:04.464755 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:04 crc kubenswrapper[5109]: I0217 00:10:04.464765 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.465025 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.542199 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.643034 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.743122 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.843344 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:04 crc kubenswrapper[5109]: E0217 00:10:04.944369 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.045286 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.146265 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.246375 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 
00:10:05.346643 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.447677 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.548288 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.648705 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.749019 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.850064 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:05 crc kubenswrapper[5109]: E0217 00:10:05.950958 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.052015 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.153097 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.253441 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.354580 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.455674 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 
00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.555869 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.656870 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.738668 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.738975 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.740022 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.740122 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.740152 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.740982 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 17 00:10:06 crc kubenswrapper[5109]: I0217 00:10:06.741393 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.741778 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.757752 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.858896 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:06 crc kubenswrapper[5109]: E0217 00:10:06.960051 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.060159 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.160700 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.261006 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.362010 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.462780 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.563926 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.664342 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.764765 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc 
kubenswrapper[5109]: E0217 00:10:07.865901 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:07 crc kubenswrapper[5109]: E0217 00:10:07.966100 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.066969 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.126892 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.131765 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.131840 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.131866 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.131898 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.131924 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:08Z","lastTransitionTime":"2026-02-17T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.146911 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.150323 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.150377 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.150390 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.150409 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.150422 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:08Z","lastTransitionTime":"2026-02-17T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.160331 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.163389 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.163428 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.163440 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.163456 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.163467 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:08Z","lastTransitionTime":"2026-02-17T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.172027 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.174885 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.174927 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.174939 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.174977 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:08 crc kubenswrapper[5109]: I0217 00:10:08.174990 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:08Z","lastTransitionTime":"2026-02-17T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.183643 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.183969 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.184003 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.284506 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.384672 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.485066 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.586047 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.686687 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.786874 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 
00:10:08.887957 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:08 crc kubenswrapper[5109]: E0217 00:10:08.988654 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.089354 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.189798 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.289980 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.391208 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.492193 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.540008 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.593289 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.694194 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.794459 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.894779 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:09 crc kubenswrapper[5109]: E0217 00:10:09.995931 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.096766 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.197057 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.297195 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.397805 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.498463 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.599489 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.700409 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.800809 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:10 crc kubenswrapper[5109]: E0217 00:10:10.901208 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.001989 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.102344 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.203299 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.303912 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.404243 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: I0217 00:10:11.464131 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:10:11 crc kubenswrapper[5109]: I0217 00:10:11.464999 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:11 crc kubenswrapper[5109]: I0217 00:10:11.465084 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:11 crc kubenswrapper[5109]: I0217 00:10:11.465118 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.465897 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.504903 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.605413 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.706489 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.807245 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:11 crc kubenswrapper[5109]: E0217 00:10:11.907360 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.008494 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.109486 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.210050 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.311121 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.411477 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.511647 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.612304 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.713422 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.813824 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:12 crc kubenswrapper[5109]: E0217 00:10:12.914772 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.015779 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.116395 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.217399 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.317722 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.418116 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.518225 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.618372 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.718925 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.819749 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:13 crc kubenswrapper[5109]: E0217 00:10:13.920222 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.020427 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.121730 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.222674 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.323712 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.424657 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.525110 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.625724 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.726419 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.826823 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:14 crc kubenswrapper[5109]: E0217 00:10:14.927083 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.028207 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.129331 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.229562 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.330577 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.431689 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.532402 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.633697 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.733916 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.834645 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:15 crc kubenswrapper[5109]: E0217 00:10:15.935649 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.036413 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.137527 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.237920 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.338553 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.439061 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.539737 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.639955 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.740463 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.841180 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:16 crc kubenswrapper[5109]: E0217 00:10:16.941874 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.042772 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.143845 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.244917 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.345410 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.446705 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.546831 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.648041 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.748464 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.849403 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:17 crc kubenswrapper[5109]: E0217 00:10:17.950422 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.051575 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.152679 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.253455 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.354686 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.447395 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.451940 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.452186 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.452337 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.452473 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.452643 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:18Z","lastTransitionTime":"2026-02-17T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.463661 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.464844 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.464928 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.464955 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.465952 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.466487 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.466872 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.470289 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.478265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.478319 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.478332 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.478352 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.478366 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:18Z","lastTransitionTime":"2026-02-17T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.494564 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.499516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.499570 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.499583 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.499622 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.499637 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:18Z","lastTransitionTime":"2026-02-17T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.519534 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.519620 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.519642 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.519666 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:18 crc kubenswrapper[5109]: I0217 00:10:18.519686 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:18Z","lastTransitionTime":"2026-02-17T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.535338 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.535373 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.635449 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.735819 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.836110 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:18 crc kubenswrapper[5109]: E0217 00:10:18.936925 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.037249 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.137716 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.238690 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.339566 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.439707 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.540516 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.540569 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.640915 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.741235 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.842300 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:19 crc kubenswrapper[5109]: E0217 00:10:19.943308 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.044089 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.145197 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.246540 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.347777 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.448837 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.549029 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.650518 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.750916 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.851782 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:20 crc kubenswrapper[5109]: E0217 00:10:20.952074 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.053189 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.154210 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.254622 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.355084 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.455806 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.556166 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.656633 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.757043 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.858071 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:21 crc kubenswrapper[5109]: E0217 00:10:21.959344 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.060610 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.161781 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.262933 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.363573 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.464077 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.565363 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.666145 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.766655 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.867342 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:22 crc kubenswrapper[5109]: E0217 00:10:22.967836 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.069010 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.170067 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.270284 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.371412 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.472114 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.572370 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.673489 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.773651 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.874710 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:23 crc kubenswrapper[5109]: E0217 00:10:23.975580 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.075931 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.176357 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.279423 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.379542 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.480306 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.580688 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.681167 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.781473 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.881790 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:24 crc kubenswrapper[5109]: E0217 00:10:24.982809 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.084209 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.185483 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.286450 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.386980 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.487944 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.588805 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.689135 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.789381 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.889972 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:25 crc kubenswrapper[5109]: E0217 00:10:25.991141 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.092217 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.193245 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.293716 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.394321 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.494751 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.595393 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.695761 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.796947 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.897255 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:26 crc kubenswrapper[5109]: E0217 00:10:26.997693 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.098187 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.198289 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.298996 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.399131 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.499429 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.600371 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.701156 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.801472 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:27 crc kubenswrapper[5109]: E0217 00:10:27.901638 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.002417 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.103486 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.204422 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.305230 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.324161 5109 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.405952 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.506677 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.606992 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.708118 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.729796 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.734263 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.734301 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.734310 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.734324 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.734333 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:28Z","lastTransitionTime":"2026-02-17T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.748025 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.752108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.752157 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.752176 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.752201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.752217 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:28Z","lastTransitionTime":"2026-02-17T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.769281 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.774240 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.774294 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.774313 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.774336 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.774390 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:28Z","lastTransitionTime":"2026-02-17T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.793988 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.794026 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.794038 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.794056 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:28 crc kubenswrapper[5109]: I0217 00:10:28.794068 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:28Z","lastTransitionTime":"2026-02-17T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.810095 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"351c3bed-1bde-4016-be98-c82504203bf7\\\",\\\"systemUUID\\\":\\\"85fb0ff0-40b9-49c9-951f-8aba64a9d9fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.810333 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.810371 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:28 crc kubenswrapper[5109]: E0217 00:10:28.910491 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.010959 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.111833 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.212640 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.312818 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.413424 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.514257 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.541846 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: 
E0217 00:10:29.614690 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: I0217 00:10:29.664061 5109 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.714782 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.815430 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:29 crc kubenswrapper[5109]: E0217 00:10:29.916585 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.017198 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.118150 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.127290 5109 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.200152 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.214782 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.220896 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.221359 5109 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.221483 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.221614 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.221760 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.222772 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325417 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325485 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325503 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325529 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325547 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.325795 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.402532 5109 apiserver.go:52] "Watching apiserver" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.411548 5109 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.412297 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-t9gkm","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-node-5wnz5","openshift-dns/node-resolver-lscz2","openshift-multus/multus-bbh4j","openshift-network-node-identity/network-node-identity-dgvkt","openshift-image-registry/node-ca-lxqdh","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-hjvm4","openshift-multus/multus-additional-cni-plugins-9hkt8","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"] Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.413920 5109 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.414887 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.415175 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.416049 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.416361 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.417075 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.417302 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.417726 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.418009 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.418936 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.420947 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.421471 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.426913 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.425848 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.426396 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.427886 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.429303 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.435043 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.435129 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.435155 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 
crc kubenswrapper[5109]: I0217 00:10:30.435160 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.435188 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.435215 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.435301 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.440154 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.442674 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.444788 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.444821 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.445202 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.445026 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.445245 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.444907 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.445692 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.447778 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.448402 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.448809 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.449455 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.454425 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.455980 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.457501 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.457632 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.458122 5109 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.458908 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.459458 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.460521 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.463260 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.464380 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.465024 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.465424 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.468535 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.468726 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.469867 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.471502 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.472167 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.472327 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.472520 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.476246 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.476323 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.476512 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.476624 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.477068 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.477158 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.477336 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.478504 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.478754 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.481541 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.481685 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.492779 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900bd7e9-9e0a-4472-9882-1a0b3e829007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\
\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"m
ountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name
\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\
\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5wnz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.501131 5109 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.508346 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.520401 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxqdh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsmcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxqdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.527152 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.534273 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.534560 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.534769 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.534954 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.535306 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bbh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1a466bd-accd-4381-b1f0-357d6e20410e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7hjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bbh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.535668 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.535443 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.536372 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.536544 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.536739 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.536987 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537213 5109 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537254 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537267 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.536759 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537151 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537287 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537533 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537829 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537979 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538124 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538255 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538563 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.539890 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.540086 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.537894 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538152 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538323 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538474 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.538869 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.539032 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.539498 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.539773 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.540258 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.540444 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.541303 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.541796 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.542050 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.542200 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.542342 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.542499 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.542698 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.543325 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.543838 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.543414 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544343 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544507 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544688 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544846 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544993 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545151 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545288 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545427 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545730 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.544611 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: 
"b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545263 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545504 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545939 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546129 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.545882 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546372 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546503 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546669 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546836 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546991 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" 
(UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547139 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546608 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.546612 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547030 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547136 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547465 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547478 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547254 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547315 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547586 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547749 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547749 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547805 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547829 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547964 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547988 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548012 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548035 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548058 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548083 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548105 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548128 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548154 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 
17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548153 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548177 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548199 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548221 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548248 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548287 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548516 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548680 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548691 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548727 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548750 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548772 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548797 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548821 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548821 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548845 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548868 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.548949 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549011 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549334 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549359 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549447 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549604 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.549870 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.547865 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550090 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550322 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550373 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550425 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550495 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550525 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550547 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550569 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550605 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550628 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550648 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550669 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550689 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550690 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550712 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550796 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550822 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550843 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550867 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550889 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550907 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550927 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550943 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550960 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550981 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550998 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551017 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551035 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551054 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551070 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551086 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551101 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551120 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551135 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551184 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551203 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551272 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551293 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551315 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551334 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551356 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551376 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551396 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551416 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.550955 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.551442 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.051413962 +0000 UTC m=+102.382968930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551615 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.551858 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.552366 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.552425 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.552636 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.552655 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.552708 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.553039 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.553318 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.553791 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.553831 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555192 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554048 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554090 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554110 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554182 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554297 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555310 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555359 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555396 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555422 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555447 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555471 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555494 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555559 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555588 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555656 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555685 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555715 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555740 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555767 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.555793 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555819 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555843 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555866 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555891 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555915 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod 
\"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555936 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555959 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555981 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556005 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556030 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556074 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556100 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556127 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556153 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556177 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556201 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.556225 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556252 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556276 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556304 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556328 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556351 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556373 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556398 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554473 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554483 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554650 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554828 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.554955 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555777 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.555820 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556040 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556062 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556283 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556313 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556400 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556422 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556746 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556751 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.556947 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.557378 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.557438 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.557515 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.557800 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.558308 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.558807 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.558807 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.559019 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.559080 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.559660 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.561107 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.561557 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.561671 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.561766 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562111 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562340 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562470 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562874 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562944 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562951 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.562982 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563040 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563118 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563245 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563280 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563315 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563394 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563467 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563812 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563897 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563943 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563983 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.563981 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564024 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564060 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564092 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564130 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564162 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564193 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564229 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564258 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564290 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564319 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564352 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564384 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564414 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564449 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564485 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564520 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564552 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564583 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564644 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564681 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564718 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 
00:10:30.564752 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564881 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564887 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564917 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565662 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: 
\"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565756 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565836 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565912 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566019 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566117 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564581 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564679 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.564716 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565056 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565104 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565212 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565514 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.565984 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566010 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566154 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566202 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566624 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.566230 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567016 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567048 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567077 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567109 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567157 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567182 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567208 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567371 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567406 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567439 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: 
\"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567472 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567510 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567538 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567566 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567612 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567638 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567663 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567687 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567710 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567732 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.567989 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: 
\"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568019 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568047 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568072 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568096 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568120 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568147 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568171 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568193 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568218 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568241 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568265 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: 
\"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568288 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568561 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568647 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.568821 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569107 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569182 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569133 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569177 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569296 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569381 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569498 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569580 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569635 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569655 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.569719 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570133 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570256 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570313 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570390 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570429 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570455 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570501 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570562 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570656 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570683 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570851 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.570957 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571100 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571220 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571650 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571826 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571830 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.571899 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.572167 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.572845 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573259 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573341 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573366 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573764 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573848 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.573960 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574190 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574325 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574635 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574677 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574675 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574724 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574825 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.574824 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576033 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.575154 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.575379 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.575520 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.575753 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.575869 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576279 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576347 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576377 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576381 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576414 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576683 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576723 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576731 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576765 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.576961 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577136 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577195 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577330 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577315 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577488 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577443 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577530 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577547 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577748 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577869 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577802 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578078 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577794 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577894 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.577895 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578111 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578169 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-k8s-cni-cncf-io\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578186 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-multus-certs\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578258 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578263 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-os-release\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578295 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578320 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578343 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578368 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hjg\" (UniqueName: \"kubernetes.io/projected/a1a466bd-accd-4381-b1f0-357d6e20410e-kube-api-access-q7hjg\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578399 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578405 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578500 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578620 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5867f26a-eddd-4d0b-bfa3-e7c68e976330-proxy-tls\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578655 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578678 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-cni-binary-copy\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578694 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578694 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578745 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578780 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5536526-72ad-4b54-98bf-7b293cbf26ab-tmp-dir\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578810 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578879 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnc2\" (UniqueName: \"kubernetes.io/projected/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-kube-api-access-crnc2\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578922 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for
volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.578984 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-socket-dir-parent\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579017 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579033 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-system-cni-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579188 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.579227 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579259 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579286 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jfd\" (UniqueName: \"kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579314 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579339 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-os-release\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " 
pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579443 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjnj\" (UniqueName: \"kubernetes.io/projected/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-kube-api-access-wrjnj\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579478 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579501 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579528 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lnz\" (UniqueName: \"kubernetes.io/projected/b5536526-72ad-4b54-98bf-7b293cbf26ab-kube-api-access-n5lnz\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579552 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579730 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579760 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579784 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-bin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579806 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-daemon-config\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579826 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-etc-kubernetes\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579852 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579877 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579925 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.579980 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580029 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.580050 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580068 5109 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.580133 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.080111946 +0000 UTC m=+102.411666704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580079 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580415 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-host\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580442 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-kubelet\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.580492 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580513 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580541 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-cnibin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580570 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-hostroot\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580612 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.580637 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580656 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580677 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-netns\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580701 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-multus\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580722 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.580740 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-serviceca\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580761 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580787 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m8bp\" (UniqueName: \"kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580813 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580841 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-conf-dir\") pod \"multus-bbh4j\" (UID: 
\"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580861 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmcd\" (UniqueName: \"kubernetes.io/projected/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-kube-api-access-dsmcd\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580880 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580902 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5536526-72ad-4b54-98bf-7b293cbf26ab-hosts-file\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580924 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5867f26a-eddd-4d0b-bfa3-e7c68e976330-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580949 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-system-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580970 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5867f26a-eddd-4d0b-bfa3-e7c68e976330-rootfs\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.580992 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn449\" (UniqueName: \"kubernetes.io/projected/5867f26a-eddd-4d0b-bfa3-e7c68e976330-kube-api-access-fn449\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581012 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581030 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 
00:10:30.581063 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581082 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581101 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581120 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cnibin\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581249 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581265 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581277 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581290 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581304 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581317 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581330 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581344 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581359 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.581365 5109 configmap.go:193] Couldn't get 
configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581372 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581387 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581402 5109 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.581426 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.08140849 +0000 UTC m=+102.412963248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581303 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581613 5109 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581589 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581666 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581685 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: 
\"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581703 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581716 5109 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581729 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581741 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581755 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581767 5109 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581779 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: 
\"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581791 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581804 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581818 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581830 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581842 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581853 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581866 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 
17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581878 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581890 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581904 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581917 5109 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581928 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581939 5109 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581952 5109 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581964 5109 reconciler_common.go:299] "Volume detached 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581975 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581987 5109 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.581999 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582011 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582022 5109 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582033 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582047 5109 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582061 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582074 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582085 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582096 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582107 5109 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582119 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582132 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 
17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582142 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582155 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582166 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582177 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582188 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582200 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582211 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582222 5109 reconciler_common.go:299] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582233 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582244 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582256 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582268 5109 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582280 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582291 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582304 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: 
\"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582316 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582327 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582338 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582349 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582360 5109 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582373 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582384 5109 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582396 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582407 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582417 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582428 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582440 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582455 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582467 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" 
DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582479 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582491 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582502 5109 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582514 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582525 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582537 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582550 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582564 5109 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582575 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582586 5109 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582619 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582632 5109 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582643 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582654 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582665 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582677 5109 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582688 5109 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582701 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582712 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582723 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582735 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582747 5109 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582759 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582769 5109 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582780 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582792 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582792 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582804 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582818 5109 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582833 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582844 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582856 5109 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582864 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582869 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582894 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582906 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582918 5109 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582927 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582937 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582947 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.582958 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582968 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582978 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582988 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.582999 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583013 5109 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583027 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583039 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583050 5109 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583061 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583071 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583081 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583112 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583122 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583130 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node 
\"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583140 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583149 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583160 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583170 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583179 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583188 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583197 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583207 5109 reconciler_common.go:299] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583216 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583229 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583335 5109 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583346 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583362 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583372 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583361 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583382 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583418 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583376 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583418 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583430 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583440 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583434 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583543 5109 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583557 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583571 5109 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583583 5109 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583612 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.583753 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.584200 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.584313 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.584694 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.584714 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.584777 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.585301 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.585579 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.585904 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.586868 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587005 5109 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587022 5109 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587034 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587068 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587122 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587140 5109 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc 
kubenswrapper[5109]: I0217 00:10:30.587153 5109 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587187 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587199 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587210 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587222 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587258 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587270 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587281 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587310 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587322 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587334 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587350 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587400 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587411 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587424 5109 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587453 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587486 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587496 5109 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587508 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587542 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587553 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587564 5109 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 
17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587575 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587714 5109 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587730 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587762 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587813 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587825 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587838 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587849 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587880 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.587931 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.588722 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.600466 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.600528 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.600783 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.601149 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.601174 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.601188 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.601267 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.101244361 +0000 UTC m=+102.432799139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.601440 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.602204 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.602209 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.604336 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.604820 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.604952 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.605054 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.605196 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.105171505 +0000 UTC m=+102.436726503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.606405 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.606971 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.608329 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.614184 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod 
"a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.615850 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.615670 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.617877 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-lscz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5536526-72ad-4b54-98bf-7b293cbf26ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5lnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-lscz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.619187 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.619668 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.626544 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t9gkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrjnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t9gkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.631136 5109 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643231 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643279 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643294 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643314 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643328 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.643881 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.646851 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f24d6129-d445-44fb-9650-c41682f49961\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ac2793428d71aba7d6d42ce84de49139be1ce4d8ef3f17f38d753a042d9b7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9240a3bec4000d7ca27bea3887268965add2d848b7ce30fe954758922063461e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1ed21950236bfcef3e4e43b1b1b51807c2bf962afafe52fea0c329bb95ec8a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d4b1dec918be4294e36e17ec1d41fe25afc4d62900b5d238a1765dff29cb3479\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://adca90b4cbebf962d60acc3a5facc178862fbd9dc66075a6ddb72c746ecf6c72\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a48ddaae74ae155d492a43794ecc243ec32762eb7a934225e66d824ef33860b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a48ddaae74ae155d492a43794ecc243ec32762eb7a934225e66d824ef33860b7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://bb9fc62050994fe5e33821e86436a027f0bdc0001cfb5d2911c514e6ebea56f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb9fc62050994fe5e33821e86436a027f0bdc0001cfb5d2911c514e6ebea56f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdb84989ffbaeb987f1b666d66905dcd1e8c997c1ffaf47e890b744793ff4fc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb84989ffbaeb987f1b666d66905dcd1e8c997c1ffaf47e890b744793ff4fc5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:08:49Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.650887 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.651430 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.658375 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7fd939fe-0a48-4e64-9af1-2979c8b7ff4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9276ef83ae446d7739ce65ad4e09a455b6208b9c7a53629a957811a2911843f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6724cb7115224b3beafe0f51aabd64e0d52a6101b64fe1dd025f1b91232bc384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d6dd4d88205d07bc393ac21615ad8ae1693766d7476589071215e1093a4d832e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2699fb5c0bc449f918758c90f560033cfa82671cbb60f1d79c7a1dda888f194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,
\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:08:49Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.666174 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c1f03dd-cfa3-495a-aa99-f263cb05e8e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3305e1572aeb5114f3fc14234cbfd9910730408fc735ebc9138bae8714cb54ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a94f5092ece221bd275685b40e310ce0ef6a4928f5127934f7287e842dd65a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a94f5092ece221bd275685b40e310ce0ef6a4928f5127934f7287e842dd65a6d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:08:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.678052 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.686251 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxqdh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsmcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxqdh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688671 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-os-release\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 
00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688715 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688742 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688764 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688787 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hjg\" (UniqueName: \"kubernetes.io/projected/a1a466bd-accd-4381-b1f0-357d6e20410e-kube-api-access-q7hjg\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688813 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688835 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5867f26a-eddd-4d0b-bfa3-e7c68e976330-proxy-tls\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688858 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-cni-binary-copy\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688881 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688906 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688931 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5536526-72ad-4b54-98bf-7b293cbf26ab-tmp-dir\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688958 5109 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-crnc2\" (UniqueName: \"kubernetes.io/projected/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-kube-api-access-crnc2\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.688982 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-socket-dir-parent\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689006 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-system-cni-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689028 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689054 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689076 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689100 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jfd\" (UniqueName: \"kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689124 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-os-release\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689148 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wrjnj\" (UniqueName: \"kubernetes.io/projected/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-kube-api-access-wrjnj\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689172 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689194 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689216 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n5lnz\" (UniqueName: \"kubernetes.io/projected/b5536526-72ad-4b54-98bf-7b293cbf26ab-kube-api-access-n5lnz\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689238 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689263 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689286 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689311 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-bin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689333 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-daemon-config\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689355 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-etc-kubernetes\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689398 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689422 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689448 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib\") 
pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689482 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-host\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689505 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-kubelet\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689521 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689608 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689543 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689662 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689691 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-cnibin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689714 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-hostroot\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689737 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-os-release\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689747 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689776 5109 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689792 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689802 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-netns\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689824 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689831 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-multus\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689876 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod 
\"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689907 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-multus\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.689912 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-os-release\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690040 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690047 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690078 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-serviceca\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690119 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-netns\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690122 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690151 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-system-cni-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690207 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-hostroot\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690287 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-socket-dir-parent\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690331 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-host\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690353 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690355 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690387 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691345 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690535 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690567 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690586 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690613 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-cni-bin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691071 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5536526-72ad-4b54-98bf-7b293cbf26ab-tmp-dir\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691113 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-cnibin\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691135 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-var-lib-kubelet\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.691191 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:30 crc kubenswrapper[5109]: E0217 00:10:30.691469 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:10:31.191451761 +0000 UTC m=+102.523006519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691349 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7m8bp\" (UniqueName: \"kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691214 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691529 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-conf-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691551 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsmcd\" (UniqueName: \"kubernetes.io/projected/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-kube-api-access-dsmcd\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691570 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691587 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5536526-72ad-4b54-98bf-7b293cbf26ab-hosts-file\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691620 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5867f26a-eddd-4d0b-bfa3-e7c68e976330-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691639 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-system-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691656 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5867f26a-eddd-4d0b-bfa3-e7c68e976330-rootfs\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.690413 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691810 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fn449\" (UniqueName: \"kubernetes.io/projected/5867f26a-eddd-4d0b-bfa3-e7c68e976330-kube-api-access-fn449\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691831 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691849 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691874 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691891 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691907 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cnibin\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691925 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691943 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.692054 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-conf-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.692101 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.691306 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.692565 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.693331 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-multus-daemon-config\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.693615 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.694340 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-etc-kubernetes\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.695027 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5536526-72ad-4b54-98bf-7b293cbf26ab-hosts-file\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.695114 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cnibin\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.695166 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.695580 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5867f26a-eddd-4d0b-bfa3-e7c68e976330-rootfs\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.695958 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696100 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696518 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696670 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-system-cni-dir\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696708 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696773 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-k8s-cni-cncf-io\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696781 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.696812 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-multus-certs\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.697136 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-multus-certs\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.697815 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5867f26a-eddd-4d0b-bfa3-e7c68e976330-mcd-auth-proxy-config\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.697862 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a1a466bd-accd-4381-b1f0-357d6e20410e-host-run-k8s-cni-cncf-io\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.698052 5109 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699719 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699738 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699756 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699771 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699784 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699802 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699816 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699829 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699843 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699856 5109 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699868 5109 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699880 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699893 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699906 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699918 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699930 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699942 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699956 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699968 5109 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699982 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.699995 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700009 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700022 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700035 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700047 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700059 5109 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700072 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700086 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.700103 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.698134 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5867f26a-eddd-4d0b-bfa3-e7c68e976330-proxy-tls\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.701632 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.703720 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-serviceca\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.705662 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"
Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.708462 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-crnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9hkt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.710706 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a1a466bd-accd-4381-b1f0-357d6e20410e-cni-binary-copy\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.715875 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.716096 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lnz\" (UniqueName: \"kubernetes.io/projected/b5536526-72ad-4b54-98bf-7b293cbf26ab-kube-api-access-n5lnz\") pod \"node-resolver-lscz2\" (UID: \"b5536526-72ad-4b54-98bf-7b293cbf26ab\") " pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.716722 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dsmcd\" (UniqueName: \"kubernetes.io/projected/68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5-kube-api-access-dsmcd\") pod \"node-ca-lxqdh\" (UID: \"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5\") " pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.716720 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrjnj\" (UniqueName: \"kubernetes.io/projected/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-kube-api-access-wrjnj\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.717664 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jfd\" (UniqueName: \"kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.718256 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hjg\" (UniqueName: \"kubernetes.io/projected/a1a466bd-accd-4381-b1f0-357d6e20410e-kube-api-access-q7hjg\") pod \"multus-bbh4j\" (UID: \"a1a466bd-accd-4381-b1f0-357d6e20410e\") " pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.718417 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn449\" (UniqueName: \"kubernetes.io/projected/5867f26a-eddd-4d0b-bfa3-e7c68e976330-kube-api-access-fn449\") pod \"machine-config-daemon-hjvm4\" (UID: \"5867f26a-eddd-4d0b-bfa3-e7c68e976330\") " pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.719249 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert\") pod \"ovnkube-node-5wnz5\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.720656 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnc2\" (UniqueName: \"kubernetes.io/projected/8cb0bf5c-93dc-47f6-9e86-3071c8865bbb-kube-api-access-crnc2\") pod \"multus-additional-cni-plugins-9hkt8\" (UID: \"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb\") " pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.721306 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m8bp\" (UniqueName: \"kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp\") pod \"ovnkube-control-plane-57b78d8988-d9cml\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.722335 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867f26a-eddd-4d0b-bfa3-e7c68e976330\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fn449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fn449\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hjvm4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.732693 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m8bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7m8bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-d9cml\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.744681 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d30b138-f18b-4f7c-b73f-d35ade3012e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2714abd11f
9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T00:09:58Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0217 00:09:57.810386 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0217 00:09:57.810585 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0217 00:09:57.811554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4257237490/tls.crt::/tmp/serving-cert-4257237490/tls.key\\\\\\\"\\\\nI0217 00:09:58.358215 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 00:09:58.360245 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 00:09:58.360263 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 00:09:58.360290 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 00:09:58.360300 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 00:09:58.363763 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 00:09:58.363785 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:09:58.363790 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 00:09:58.363795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 00:09:58.363799 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 00:09:58.363802 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 00:09:58.363806 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 00:09:58.363836 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 00:09:58.366681 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:08:49Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.745754 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.745819 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.745838 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.745862 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.745878 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.756638 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.756658 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.767242 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.771746 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.786433 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900bd7e9-9e0a-4472-9882-1a0b3e829007\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6jfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5wnz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.788696 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 17 00:10:30 crc kubenswrapper[5109]: W0217 00:10:30.792250 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-6b369f7aef3a91bbd9ef747dcd90f8a47bc76facd2bb2c4cae54daca050232fa WatchSource:0}: Error finding container 6b369f7aef3a91bbd9ef747dcd90f8a47bc76facd2bb2c4cae54daca050232fa: Status 404 returned error can't find the container with id 6b369f7aef3a91bbd9ef747dcd90f8a47bc76facd2bb2c4cae54daca050232fa Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.798818 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-bbh4j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1a466bd-accd-4381-b1f0-357d6e20410e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q7hjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:10:30Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bbh4j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.798874 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.809837 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"018da1af-c718-4880-a14d-8760d8f7b267\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c46f62e403fea3cdc8be34db5c36b31a122fe6d78a6308dc28830431c5dc7b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"su
pplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2b47604e1052f54fb2275dce07a558601c0b4bfad7005f60c44e8dce92e3005c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://469a7ec86233162cfa2e546021f0055071466a80a75fa48b2a3473e405edc680\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\"
:true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T00:08:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://55504d04baf9b16a365257fe21ede930d563e3b29efbd5d90657b03324866a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55504d04baf9b16a365257fe21ede930d563e3b29efbd5d90657b03324866a57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T00:08:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T00:08:49Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.811142 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:10:30 crc kubenswrapper[5109]: W0217 00:10:30.812337 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-24ae768c6eebb11af7b833bccf226c707ac53336aa8c4c389375ddb31de79a7c WatchSource:0}: Error finding container 24ae768c6eebb11af7b833bccf226c707ac53336aa8c4c389375ddb31de79a7c: Status 404 returned error can't find the container with id 24ae768c6eebb11af7b833bccf226c707ac53336aa8c4c389375ddb31de79a7c Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.820423 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lxqdh" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.825790 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.829370 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bbh4j" Feb 17 00:10:30 crc kubenswrapper[5109]: W0217 00:10:30.831497 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900bd7e9_9e0a_4472_9882_1a0b3e829007.slice/crio-70b5a3895f34cadc5c19731a662f705a995c5a83c7a31f0b2f61c4a9a7cc83f6 WatchSource:0}: Error finding container 70b5a3895f34cadc5c19731a662f705a995c5a83c7a31f0b2f61c4a9a7cc83f6: Status 404 returned error can't find the container with id 70b5a3895f34cadc5c19731a662f705a995c5a83c7a31f0b2f61c4a9a7cc83f6 Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.839987 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-lscz2" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.844115 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.848416 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.851172 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.851225 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.851238 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.851255 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.851268 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.852669 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.854070 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"dcf8e541171f23afb04d8cdf3e2a02ba2302d90886285d0c9df9316f55021936"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.856330 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"1fecd3566b46a6902f49713f3330b527dd71edc33e0423ed075b8c2d1287d595"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.864654 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T00:10:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.870973 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"24ae768c6eebb11af7b833bccf226c707ac53336aa8c4c389375ddb31de79a7c"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.874714 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" 
event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"70b5a3895f34cadc5c19731a662f705a995c5a83c7a31f0b2f61c4a9a7cc83f6"} Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.876325 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"6b369f7aef3a91bbd9ef747dcd90f8a47bc76facd2bb2c4cae54daca050232fa"} Feb 17 00:10:30 crc kubenswrapper[5109]: W0217 00:10:30.892208 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5536526_72ad_4b54_98bf_7b293cbf26ab.slice/crio-545bdb63be99ed1fc1b408c9c01dd59585e4da6a5a11f368d21b1da308224579 WatchSource:0}: Error finding container 545bdb63be99ed1fc1b408c9c01dd59585e4da6a5a11f368d21b1da308224579: Status 404 returned error can't find the container with id 545bdb63be99ed1fc1b408c9c01dd59585e4da6a5a11f368d21b1da308224579 Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.952828 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.952868 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.952877 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.952891 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:30 crc kubenswrapper[5109]: I0217 00:10:30.952901 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:30Z","lastTransitionTime":"2026-02-17T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.055130 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.055171 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.055180 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.055195 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.055204 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.108931 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.109061 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109099 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:32.109069972 +0000 UTC m=+103.440624730 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109163 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109179 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109189 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.109222 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109231 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b 
podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:32.109217565 +0000 UTC m=+103.440772323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.109255 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.109291 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109358 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109395 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:32.1093891 +0000 UTC m=+103.440943858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109505 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109516 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109523 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.109548 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:32.109539914 +0000 UTC m=+103.441094672 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.110423 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.110456 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:32.110448988 +0000 UTC m=+103.442003746 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.157560 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.157608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.157617 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.157631 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.157642 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.210879 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.211029 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: E0217 00:10:31.211101 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:10:32.211082931 +0000 UTC m=+103.542637689 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.260096 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.260137 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.260155 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.260169 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.260179 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.362234 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.362273 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.362286 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.362303 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.362314 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.464757 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.464801 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.464812 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.464827 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.464838 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.468551 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.469561 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.472262 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.474072 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.478282 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.484827 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.485790 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.489235 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.489878 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.491670 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.493035 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.496025 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.497052 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.498898 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.499314 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.500752 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.501472 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.504011 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.506078 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.507547 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.509049 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.514080 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.515521 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.517528 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" 
path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.518633 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.520461 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.522162 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.522878 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.525733 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.526264 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.529260 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.531320 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.537257 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.547308 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.548052 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.548962 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.554891 5109 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.554994 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.559233 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.562177 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.563473 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.565274 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.565895 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567238 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567285 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567300 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567317 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567449 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.567329 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.568283 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.569335 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.570376 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.572939 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.574693 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.575930 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.577277 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.579758 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.580962 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.582481 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.584380 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.585088 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.586280 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.588192 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.671148 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.671219 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.671237 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.671276 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.671295 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.773021 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.773065 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.773079 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.773094 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.773104 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.875194 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.875244 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.875257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.875273 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.875287 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.892321 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="dcf39dd772f2c6cba55688cb81c79937c0935abc2597e779b83f1b8a7ad3a350" exitCode=0
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.892438 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"dcf39dd772f2c6cba55688cb81c79937c0935abc2597e779b83f1b8a7ad3a350"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.892510 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerStarted","Data":"1647c64fd40289e798e055a99ac5cf963ba5da50227aac53eee7f6657c7f3adc"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.897700 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bbh4j" event={"ID":"a1a466bd-accd-4381-b1f0-357d6e20410e","Type":"ContainerStarted","Data":"db4ea6daea6acf078ea6b5f81ae1a7478dee8360368d4c8db797447141483453"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.897799 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bbh4j" event={"ID":"a1a466bd-accd-4381-b1f0-357d6e20410e","Type":"ContainerStarted","Data":"2856afa7d3a6ec591b340f21a526bb0658bd91f5e5906c64f99dd76505a8f5ba"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.900190 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lscz2" event={"ID":"b5536526-72ad-4b54-98bf-7b293cbf26ab","Type":"ContainerStarted","Data":"2b6906084fc549110f97c9d4bb6147368b49e9d7d8b78758c6e2275f00ee9984"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.900273 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-lscz2" event={"ID":"b5536526-72ad-4b54-98bf-7b293cbf26ab","Type":"ContainerStarted","Data":"545bdb63be99ed1fc1b408c9c01dd59585e4da6a5a11f368d21b1da308224579"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.902234 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" exitCode=0
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.902343 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.906625 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"801355ca2b30b7173b571869eef0d3a05011a33172fece1394d4759ca91dcf8d"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.906690 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"1bc00e80cc91f57a8cc28dd24c630288d2bc03af3329abce24d26a2fbb930a27"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.908489 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"cc33f776a2bad890bdcc220f47a521b239b0767c28c4d1962a833ed4ee84d474"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.911022 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lxqdh" event={"ID":"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5","Type":"ContainerStarted","Data":"8c8f0e13dd29eb1c02e3e11820a087c177bd73816adadf96f610e5b8b066e203"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.911073 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lxqdh" event={"ID":"68075fa3-1f79-4a7c-ba56-8a8cfd0a9be5","Type":"ContainerStarted","Data":"d09e0b03ac170e223275819d1c02c616e8ed4ee4fee3faa3524f5324ae67079e"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.913768 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"553628239da640e82e330819ba318cb5855d3bc89e97ab3d7406153f91242c0f"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.913820 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.917260 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerStarted","Data":"14033009baab184f3748e60e4924fcc69138f6f45662537d53ff835ad32aa323"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.917324 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerStarted","Data":"ea1ad12a61b9c4366fe0473f416a58533b1cca24e29cccc74cc9cc79de87cc1d"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.917344 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerStarted","Data":"f6f46e4f1b97c43f7c86a12a1a039b5937adc2498267cb2477c08c969954582a"}
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.977189 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.977583 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.977624 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.977647 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:31 crc kubenswrapper[5109]: I0217 00:10:31.977664 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:31Z","lastTransitionTime":"2026-02-17T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.080648 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.080683 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.080692 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.080705 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.080713 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.107654 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.107634413 podStartE2EDuration="2.107634413s" podCreationTimestamp="2026-02-17 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.107408777 +0000 UTC m=+103.438963545" watchObservedRunningTime="2026-02-17 00:10:32.107634413 +0000 UTC m=+103.439189171"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.123301 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.124379 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.124357492 +0000 UTC m=+105.455912250 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.125544 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.125585 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.125646 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.125677 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.125853 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.125879 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.125890 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.125946 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.125959 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.125941414 +0000 UTC m=+105.457496172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126041 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.126023616 +0000 UTC m=+105.457578374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126078 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126101 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.126094198 +0000 UTC m=+105.457648956 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126276 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126392 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126696 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.126895 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.126877068 +0000 UTC m=+105.458431826 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.183413 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.183446 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.183455 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.183470 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.183479 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.206452 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.206430848 podStartE2EDuration="2.206430848s" podCreationTimestamp="2026-02-17 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.20611466 +0000 UTC m=+103.537669438" watchObservedRunningTime="2026-02-17 00:10:32.206430848 +0000 UTC m=+103.537985616"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.225740 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.2257228749999998 podStartE2EDuration="2.225722875s" podCreationTimestamp="2026-02-17 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.223926418 +0000 UTC m=+103.555481176" watchObservedRunningTime="2026-02-17 00:10:32.225722875 +0000 UTC m=+103.557277633"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.226353 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.226651 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.227615 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:10:34.227573794 +0000 UTC m=+105.559128552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.236514 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.236494318 podStartE2EDuration="2.236494318s" podCreationTimestamp="2026-02-17 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.236021676 +0000 UTC m=+103.567576434" watchObservedRunningTime="2026-02-17 00:10:32.236494318 +0000 UTC m=+103.568049086"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.285006 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.285052 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.285062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.285076 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.285087 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.298343 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bbh4j" podStartSLOduration=82.298324762 podStartE2EDuration="1m22.298324762s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.297747807 +0000 UTC m=+103.629302565" watchObservedRunningTime="2026-02-17 00:10:32.298324762 +0000 UTC m=+103.629879530"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.326965 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-lscz2" podStartSLOduration=82.326943464 podStartE2EDuration="1m22.326943464s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.326248716 +0000 UTC m=+103.657803484" watchObservedRunningTime="2026-02-17 00:10:32.326943464 +0000 UTC m=+103.658498242"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.385195 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lxqdh" podStartSLOduration=82.385171484 podStartE2EDuration="1m22.385171484s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.371143955 +0000 UTC m=+103.702698723" watchObservedRunningTime="2026-02-17 00:10:32.385171484 +0000 UTC m=+103.716726242"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.385663 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podStartSLOduration=82.385654756 podStartE2EDuration="1m22.385654756s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.385078351 +0000 UTC m=+103.716633109" watchObservedRunningTime="2026-02-17 00:10:32.385654756 +0000 UTC m=+103.717209534"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.388623 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.388653 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.388661 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.388684 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.388694 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.463918 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.463953 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.464332 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.464042 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.464526 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.464387 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.463989 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:32 crc kubenswrapper[5109]: E0217 00:10:32.464782 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.490504 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.490745 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.490805 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.490881 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.490945 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.593540 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.593960 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.594117 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.594261 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.594389 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.696358 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.696686 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.696771 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.696874 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.696958 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.799612 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.800067 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.800082 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.800104 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.800118 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.902316 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.902377 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.902390 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.902406 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.902417 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:32Z","lastTransitionTime":"2026-02-17T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.924706 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="a932a6670b409d748abb0f6f16d6bf4e1c22ef20272fef8fe5910d421884ac45" exitCode=0 Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.924829 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"a932a6670b409d748abb0f6f16d6bf4e1c22ef20272fef8fe5910d421884ac45"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.933059 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.933115 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.933138 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:10:32 crc kubenswrapper[5109]: I0217 00:10:32.954321 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" podStartSLOduration=81.954295324 podStartE2EDuration="1m21.954295324s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:32.400226569 +0000 UTC m=+103.731781347" watchObservedRunningTime="2026-02-17 00:10:32.954295324 +0000 UTC m=+104.285850122" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.006217 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.006275 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.006293 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.006366 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.006401 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.108163 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.108465 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.108477 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.108497 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.108509 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.211143 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.211195 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.211209 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.211228 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.211270 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.313308 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.313352 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.313383 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.313400 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.313411 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.414940 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.415014 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.415023 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.415067 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.415085 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.516859 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.516920 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.516934 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.516954 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.516966 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.619360 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.619408 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.619417 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.619431 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.619441 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.722065 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.722123 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.722138 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.722156 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.722169 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.824096 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.824152 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.824165 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.824182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.824193 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.926437 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.926479 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.926487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.926501 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.926510 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:33Z","lastTransitionTime":"2026-02-17T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.937771 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"896de00ea5627f9b75c8eca632bd9a27fb1a511ea0e61f77d4cab9fd32a45576"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.941791 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="34ac79b64a5dce3e493e5566238f91510fb9174beec34a6e664e0a4fe45ca1e9" exitCode=0 Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.941862 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"34ac79b64a5dce3e493e5566238f91510fb9174beec34a6e664e0a4fe45ca1e9"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.947030 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.947072 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} Feb 17 00:10:33 crc kubenswrapper[5109]: I0217 00:10:33.947084 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.028733 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.028767 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.028775 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.028787 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.028796 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.131351 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.131743 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.131761 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.131780 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.131794 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.149928 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.150162 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.150209 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.150270 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.150314 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150513 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150536 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150555 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150703 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.150680291 +0000 UTC m=+109.482235089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150801 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.150786173 +0000 UTC m=+109.482340971 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150880 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.150927 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.150914397 +0000 UTC m=+109.482469195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151008 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151027 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151041 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151083 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.151070441 +0000 UTC m=+109.482625239 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151180 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.151269 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.151249226 +0000 UTC m=+109.482803984 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.233872 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.234168 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.234318 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.234494 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.234716 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.251835 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.252052 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.252167 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:10:38.252139116 +0000 UTC m=+109.583693944 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.337490 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.337525 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.337534 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.337549 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.337559 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.439757 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.439812 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.439829 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.439849 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.439865 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.464477 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.464676 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.464928 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.465129 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.465200 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.465302 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.465356 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:34 crc kubenswrapper[5109]: E0217 00:10:34.465456 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.541889 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.541942 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.541952 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.541965 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.541976 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.643919 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.643981 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.643999 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.644024 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.644044 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.746365 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.746427 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.746445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.746479 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.746496 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.849217 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.849320 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.849345 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.849374 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.849395 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.953427 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.953492 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.953510 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.953533 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.953551 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:34Z","lastTransitionTime":"2026-02-17T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.959477 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="900abe69941f9fe3aab6f2135ae446afd8e83d267f26ea48b27a00e160277860" exitCode=0 Feb 17 00:10:34 crc kubenswrapper[5109]: I0217 00:10:34.960298 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"900abe69941f9fe3aab6f2135ae446afd8e83d267f26ea48b27a00e160277860"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.058086 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.058160 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.058180 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.058206 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.058226 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.160971 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.161025 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.161037 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.161054 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.161068 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.268583 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.268695 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.268721 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.268754 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.268877 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.370846 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.370906 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.370925 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.370948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.370963 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.473251 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.473306 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.473319 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.473335 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.473347 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.575686 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.575931 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.576013 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.576097 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.576210 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.678535 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.678585 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.678614 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.678629 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.678639 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.781232 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.781287 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.781299 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.781318 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.781333 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.883874 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.884934 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.885158 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.885304 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.885433 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.966912 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerStarted","Data":"f189bd31e268fd1395fe3262d83ce55ba2a9b97d2ab3153d3e65736807257819"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.970939 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.989218 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.989266 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.989278 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.989294 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:35 crc kubenswrapper[5109]: I0217 00:10:35.989308 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:35Z","lastTransitionTime":"2026-02-17T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.091681 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.091731 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.091744 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.091761 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.091778 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.214156 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.214198 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.214208 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.214226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.214236 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.316385 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.316438 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.316448 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.316464 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.316475 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.418939 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.418976 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.418989 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.419007 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.419021 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.464244 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:36 crc kubenswrapper[5109]: E0217 00:10:36.464392 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.464249 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:36 crc kubenswrapper[5109]: E0217 00:10:36.464482 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.464420 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.464244 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:36 crc kubenswrapper[5109]: E0217 00:10:36.464564 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:36 crc kubenswrapper[5109]: E0217 00:10:36.464698 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.521759 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.521828 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.521844 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.521872 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.521887 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.625491 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.625554 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.625573 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.625617 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.625640 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.727920 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.727990 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.728009 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.728031 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.728048 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.830279 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.830365 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.830390 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.830421 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.830444 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.932564 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.932634 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.932648 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.932666 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.932680 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:36Z","lastTransitionTime":"2026-02-17T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.983784 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="f189bd31e268fd1395fe3262d83ce55ba2a9b97d2ab3153d3e65736807257819" exitCode=0
Feb 17 00:10:36 crc kubenswrapper[5109]: I0217 00:10:36.983845 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"f189bd31e268fd1395fe3262d83ce55ba2a9b97d2ab3153d3e65736807257819"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.034963 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.035040 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.035060 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.035085 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.035105 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.137020 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.137062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.137071 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.137084 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.137095 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.238725 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.238807 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.238822 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.238841 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.238853 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.341387 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.341447 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.341461 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.341477 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.341488 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.444034 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.444161 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.444180 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.444212 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.444235 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.546135 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.546406 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.546416 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.546429 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.546438 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.650859 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.650922 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.650937 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.650959 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.650974 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.753582 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.753701 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.753721 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.753747 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.753764 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.857099 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.857184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.857207 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.857238 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.857265 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.959631 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.959681 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.959694 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.959714 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.959726 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:37Z","lastTransitionTime":"2026-02-17T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.992015 5109 generic.go:358] "Generic (PLEG): container finished" podID="8cb0bf5c-93dc-47f6-9e86-3071c8865bbb" containerID="455fc42b8ebd2c2b9d3e929c829b596e9994670901a0c6c15231ec2bac9597df" exitCode=0
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.992116 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerDied","Data":"455fc42b8ebd2c2b9d3e929c829b596e9994670901a0c6c15231ec2bac9597df"}
Feb 17 00:10:37 crc kubenswrapper[5109]: I0217 00:10:37.998348 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerStarted","Data":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"}
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.002403 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.002440 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.002459 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.041228 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.041575 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.062114 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.062164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.062179 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.062200 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.062217 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.065738 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podStartSLOduration=88.065725353 podStartE2EDuration="1m28.065725353s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:38.064919682 +0000 UTC m=+109.396474480" watchObservedRunningTime="2026-02-17 00:10:38.065725353 +0000 UTC m=+109.397280131"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.164309 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.164382 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.164397 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.164421 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.164440 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.240798 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.240926 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241058 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.240999307 +0000 UTC m=+117.572554075 (durationBeforeRetry 8s).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.241225 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241256 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241293 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241308 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241379 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.241264 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241392 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.241371247 +0000 UTC m=+117.572926085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241435 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241396 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241487 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt"
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.241507 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241539 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.241518191 +0000 UTC m=+117.573072949 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241573 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241575 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.241567582 +0000 UTC m=+117.573122340 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.241673 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.241663284 +0000 UTC m=+117.573218042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.266089 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.266129 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.266142 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.266159 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.266171 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.343094 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.343281 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.343392 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:10:46.343367666 +0000 UTC m=+117.674922494 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.368164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.368224 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.368257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.368274 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.368286 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.463727 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.463774 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.463793 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.463886 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.463900 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.464005 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.464140 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 17 00:10:38 crc kubenswrapper[5109]: E0217 00:10:38.464226 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.471133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.471184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.471203 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.471222 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.471236 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.573907 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.573984 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.574001 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.574021 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.574053 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.676691 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.676746 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.676764 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.676787 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.676807 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.779865 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.779920 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.779931 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.779948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.779960 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.883099 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.883785 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.883816 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.883888 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.883921 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.986014 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.986069 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.986082 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.986104 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:38 crc kubenswrapper[5109]: I0217 00:10:38.986117 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:38Z","lastTransitionTime":"2026-02-17T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.007188 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" event={"ID":"8cb0bf5c-93dc-47f6-9e86-3071c8865bbb","Type":"ContainerStarted","Data":"9ec63ea2b9bd7305d3b477c53bc7300f55c8dd03f93d3e29ffb3ed4e38e0359d"} Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.088401 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.088453 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.088466 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.088481 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.088492 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:39Z","lastTransitionTime":"2026-02-17T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.172359 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.172445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.172473 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.172510 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.172541 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:39Z","lastTransitionTime":"2026-02-17T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.199728 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.199778 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.199792 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.199811 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.199824 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T00:10:39Z","lastTransitionTime":"2026-02-17T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.228728 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9hkt8" podStartSLOduration=89.228701152 podStartE2EDuration="1m29.228701152s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:39.032645072 +0000 UTC m=+110.364199910" watchObservedRunningTime="2026-02-17 00:10:39.228701152 +0000 UTC m=+110.560255950" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.229117 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"] Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.234945 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.237463 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.237585 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.237726 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.237729 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.357654 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c570919-eaf7-4909-b80c-229e03ba4bfd-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.357711 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c570919-eaf7-4909-b80c-229e03ba4bfd-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.358060 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c570919-eaf7-4909-b80c-229e03ba4bfd-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.358323 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.358383 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: 
\"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.459710 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c570919-eaf7-4909-b80c-229e03ba4bfd-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.459887 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.460030 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.460294 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.460109 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" 
(UniqueName: \"kubernetes.io/host-path/1c570919-eaf7-4909-b80c-229e03ba4bfd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.460424 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c570919-eaf7-4909-b80c-229e03ba4bfd-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.461047 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c570919-eaf7-4909-b80c-229e03ba4bfd-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.461915 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c570919-eaf7-4909-b80c-229e03ba4bfd-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.464999 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.477135 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c570919-eaf7-4909-b80c-229e03ba4bfd-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.480184 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.496356 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c570919-eaf7-4909-b80c-229e03ba4bfd-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-dr268\" (UID: \"1c570919-eaf7-4909-b80c-229e03ba4bfd\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: I0217 00:10:39.549817 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268"
Feb 17 00:10:39 crc kubenswrapper[5109]: W0217 00:10:39.580844 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c570919_eaf7_4909_b80c_229e03ba4bfd.slice/crio-c01409f666b8f40dc62397bfc537f15bdbe6216b5a2e5c7ae8026324c81a63bd WatchSource:0}: Error finding container c01409f666b8f40dc62397bfc537f15bdbe6216b5a2e5c7ae8026324c81a63bd: Status 404 returned error can't find the container with id c01409f666b8f40dc62397bfc537f15bdbe6216b5a2e5c7ae8026324c81a63bd
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.011622 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" event={"ID":"1c570919-eaf7-4909-b80c-229e03ba4bfd","Type":"ContainerStarted","Data":"17b8431c1629223f8f9237c80dec9e9115ea65c042914f08165310eac656407c"}
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.011717 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" event={"ID":"1c570919-eaf7-4909-b80c-229e03ba4bfd","Type":"ContainerStarted","Data":"c01409f666b8f40dc62397bfc537f15bdbe6216b5a2e5c7ae8026324c81a63bd"}
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.464130 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:40 crc kubenswrapper[5109]: E0217 00:10:40.464726 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.464225 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:40 crc kubenswrapper[5109]: E0217 00:10:40.464853 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.464244 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:40 crc kubenswrapper[5109]: E0217 00:10:40.464961 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.464202 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:40 crc kubenswrapper[5109]: E0217 00:10:40.465056 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.473165 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-dr268" podStartSLOduration=90.473135573 podStartE2EDuration="1m30.473135573s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:40.025802352 +0000 UTC m=+111.357357120" watchObservedRunningTime="2026-02-17 00:10:40.473135573 +0000 UTC m=+111.804690341"
Feb 17 00:10:40 crc kubenswrapper[5109]: I0217 00:10:40.474006 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t9gkm"]
Feb 17 00:10:41 crc kubenswrapper[5109]: I0217 00:10:41.014918 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:41 crc kubenswrapper[5109]: E0217 00:10:41.015095 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:42 crc kubenswrapper[5109]: I0217 00:10:42.464383 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:42 crc kubenswrapper[5109]: E0217 00:10:42.464569 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 17 00:10:42 crc kubenswrapper[5109]: I0217 00:10:42.464660 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:42 crc kubenswrapper[5109]: I0217 00:10:42.464754 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:42 crc kubenswrapper[5109]: E0217 00:10:42.464901 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 17 00:10:42 crc kubenswrapper[5109]: E0217 00:10:42.465039 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:42 crc kubenswrapper[5109]: I0217 00:10:42.465424 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:42 crc kubenswrapper[5109]: E0217 00:10:42.465531 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 17 00:10:42 crc kubenswrapper[5109]: I0217 00:10:42.465940 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef"
Feb 17 00:10:43 crc kubenswrapper[5109]: I0217 00:10:43.023710 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 17 00:10:43 crc kubenswrapper[5109]: I0217 00:10:43.025763 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674"}
Feb 17 00:10:43 crc kubenswrapper[5109]: I0217 00:10:43.026526 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:10:43 crc kubenswrapper[5109]: I0217 00:10:43.069642 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.069620218 podStartE2EDuration="13.069620218s" podCreationTimestamp="2026-02-17 00:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:43.055113057 +0000 UTC m=+114.386667845" watchObservedRunningTime="2026-02-17 00:10:43.069620218 +0000 UTC m=+114.401175136"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.464135 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.464185 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:44 crc kubenswrapper[5109]: E0217 00:10:44.464658 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.464309 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.464229 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:44 crc kubenswrapper[5109]: E0217 00:10:44.464909 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 17 00:10:44 crc kubenswrapper[5109]: E0217 00:10:44.465092 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 17 00:10:44 crc kubenswrapper[5109]: E0217 00:10:44.465228 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t9gkm" podUID="1d9259cd-7490-4a4f-b09c-db6d25fadf0e"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.741572 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.741957 5109 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.780358 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.784925 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c5txm"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.785097 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.787786 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.788151 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.788394 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.788749 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-xhskt"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.791502 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.791781 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.792157 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.792530 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.795157 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.795357 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.795566 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.797337 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.797586 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.799367 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801174 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801545 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801650 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801736 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801921 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.801971 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802083 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802166 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802187 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802268 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802551 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.802890 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.803002 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.803028 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.805290 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.805836 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.819942 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825246 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjpm\" (UniqueName: \"kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825299 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825324 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825355 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825399 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825478 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825505 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825527 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825547 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-oauth-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825584 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-service-ca\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825647 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825679 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825822 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zc52\" (UniqueName: \"kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825860 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-oauth-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.825949 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826090 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826228 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkfwv\" (UniqueName: \"kubernetes.io/projected/4aa0e237-cb03-44d4-bf30-949ab25f2e12-kube-api-access-dkfwv\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826321 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826440 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826470 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826535 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826741 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.826943 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-trusted-ca-bundle\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.827177 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.827229 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.827388 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.827434 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.860670 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.864338 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.867418 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.870065 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.870372 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.872505 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29521440-n967f"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.872735 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.874692 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7l95k"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.875287 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.876946 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-5pqmv"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.877730 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.879346 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.880564 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881470 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881531 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-5pqmv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881749 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881885 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881917 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.886561 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881980 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.887000 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.882117 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.882151 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.882257 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.883366 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-l5t5g"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.881946 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.883460 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.884288 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.885972 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.886326 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.886488 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.886525 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.883670 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.883497 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.893479 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.893680 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.893930 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.905984 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"] Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.906316 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.906525 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.906760 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.907054 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.907229 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.907407 5109 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.907721 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.907899 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.908079 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.908320 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.908511 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.908857 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.909012 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.912143 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"] Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.912700 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.916247 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.916439 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.916657 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rgvbj"] Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.916947 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.917347 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.917736 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.928199 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.928870 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"] Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.930855 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937348 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-rw5p4"] Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937513 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-service-ca\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937605 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-client\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937651 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937678 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 
00:10:44.937813 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zc52\" (UniqueName: \"kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937838 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937857 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-oauth-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937884 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937890 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-encryption-config\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.937920 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f199b766-a6b0-42f9-9fd7-a618ba099c59-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.938027 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.938059 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtxl\" (UniqueName: \"kubernetes.io/projected/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-kube-api-access-dvtxl\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.938198 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de428bc8-27d8-4397-877f-20f8105de9d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.938229 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de428bc8-27d8-4397-877f-20f8105de9d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 
00:10:44.938293 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.944627 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.950466 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.950618 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.950706 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-oauth-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.950789 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.950872 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9nt\" (UniqueName: \"kubernetes.io/projected/de428bc8-27d8-4397-877f-20f8105de9d0-kube-api-access-5c9nt\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.951057 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.951390 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.951689 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.951855 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.951971 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952000 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f199b766-a6b0-42f9-9fd7-a618ba099c59-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952048 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dkfwv\" (UniqueName: \"kubernetes.io/projected/4aa0e237-cb03-44d4-bf30-949ab25f2e12-kube-api-access-dkfwv\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952064 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de428bc8-27d8-4397-877f-20f8105de9d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952079 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f199b766-a6b0-42f9-9fd7-a618ba099c59-config\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952102 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952239 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952267 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952298 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-serving-cert\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952318 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-policies\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952334 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dhxx7\" (UID: 
\"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952349 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952366 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952384 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952408 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-trusted-ca-bundle\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952425 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gknkb\" (UniqueName: \"kubernetes.io/projected/abd1baa1-4b4c-459b-b487-5dd283fe0ad9-kube-api-access-gknkb\") pod \"downloads-747b44746d-5pqmv\" (UID: \"abd1baa1-4b4c-459b-b487-5dd283fe0ad9\") " pod="openshift-console/downloads-747b44746d-5pqmv" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952460 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7d4\" (UniqueName: \"kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952474 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-config\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952491 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2wc\" (UniqueName: \"kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952517 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zx5j\" (UniqueName: \"kubernetes.io/projected/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-kube-api-access-5zx5j\") pod 
\"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952537 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952555 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952570 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-serving-cert\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952575 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952603 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952624 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952652 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952668 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-serving-ca\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952687 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952702 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952699 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-service-ca\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952737 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thjpm\" (UniqueName: \"kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952744 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952759 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952781 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.952813 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953232 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953310 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953333 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953361 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-available-featuregates\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953382 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-dir\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953401 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953574 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953858 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953858 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.953921 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.956827 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.960435 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-config\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.958711 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.957046 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-trusted-ca-bundle\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.959452 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.960373 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.961414 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.961815 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.961874 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-oauth-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.961906 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-images\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.961967 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74z5\" (UniqueName: \"kubernetes.io/projected/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-kube-api-access-z74z5\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.962041 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.965797 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-serving-cert\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.965955 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.966084 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.966115 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f199b766-a6b0-42f9-9fd7-a618ba099c59-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.959446 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.966772 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.967032 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.967112 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.967219 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.967372 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.968378 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.970650 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.970721 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.970775 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.970902 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.971232 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.971341 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.973453 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.973706 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4aa0e237-cb03-44d4-bf30-949ab25f2e12-console-oauth-config\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.973616 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.974337 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.975675 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.978105 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qsvff"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979024 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979212 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979364 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979579 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979750 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979812 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.979858 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.980176 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.974633 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.981387 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.981622 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.986929 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.987361 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.992681 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993024 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zc52\" (UniqueName: \"kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993093 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993164 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkfwv\" (UniqueName: \"kubernetes.io/projected/4aa0e237-cb03-44d4-bf30-949ab25f2e12-kube-api-access-dkfwv\") pod \"console-64d44f6ddf-xhskt\" (UID: \"4aa0e237-cb03-44d4-bf30-949ab25f2e12\") " pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993240 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993275 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.993243 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thjpm\" (UniqueName: \"kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm\") pod \"route-controller-manager-776cdc94d6-shrhn\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.994308 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.994954 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4"]
Feb 17 00:10:44 crc kubenswrapper[5109]: I0217 00:10:44.995168 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.004073 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-c5txm\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " pod="openshift-authentication/oauth-openshift-66458b6674-c5txm"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.004684 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.005094 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.010916 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.020369 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.020873 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.027668 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.030189 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.038424 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.038851 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.042909 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.043301 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.043695 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.046575 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.046790 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.047513 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.049403 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.049605 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.055925 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.056090 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.059076 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.059106 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.059325 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.062105 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.062313 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.065027 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xhskt"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.065123 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-jdfgh"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.065201 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067528 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtxl\" (UniqueName: \"kubernetes.io/projected/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-kube-api-access-dvtxl\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067558 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de428bc8-27d8-4397-877f-20f8105de9d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067575 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"]
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067580 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-trusted-ca\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067642 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de428bc8-27d8-4397-877f-20f8105de9d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067670 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9nt\" (UniqueName: \"kubernetes.io/projected/de428bc8-27d8-4397-877f-20f8105de9d0-kube-api-access-5c9nt\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067689 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067708 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f199b766-a6b0-42f9-9fd7-a618ba099c59-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067730 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067735 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de428bc8-27d8-4397-877f-20f8105de9d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067934 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f199b766-a6b0-42f9-9fd7-a618ba099c59-config\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067966 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74763348-8544-4540-85b9-d85677c7c733-machine-approver-tls\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.067989 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068005 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de428bc8-27d8-4397-877f-20f8105de9d0-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068037 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-serving-cert\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068055 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-policies\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068076 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068094 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068112 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068132 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gknkb\" (UniqueName: \"kubernetes.io/projected/abd1baa1-4b4c-459b-b487-5dd283fe0ad9-kube-api-access-gknkb\") pod \"downloads-747b44746d-5pqmv\" (UID: \"abd1baa1-4b4c-459b-b487-5dd283fe0ad9\") " pod="openshift-console/downloads-747b44746d-5pqmv"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068152 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7d4\" (UniqueName: \"kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068168 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-config\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") "
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068187 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2wc\" (UniqueName: \"kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068206 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zx5j\" (UniqueName: \"kubernetes.io/projected/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-kube-api-access-5zx5j\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068228 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068246 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-serving-cert\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068264 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-serving-ca\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068282 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068298 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068332 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84650701-493a-45a1-abec-a28ecdba6c44-serving-cert\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068357 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-available-featuregates\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068372 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-dir\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068391 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068414 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-config\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068430 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsfpw\" (UniqueName: \"kubernetes.io/projected/84650701-493a-45a1-abec-a28ecdba6c44-kube-api-access-qsfpw\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068451 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-config\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068469 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-auth-proxy-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068487 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-images\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068504 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z74z5\" (UniqueName: \"kubernetes.io/projected/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-kube-api-access-z74z5\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068520 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068539 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chvz\" (UniqueName: 
\"kubernetes.io/projected/74763348-8544-4540-85b9-d85677c7c733-kube-api-access-5chvz\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068569 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068605 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068623 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f199b766-a6b0-42f9-9fd7-a618ba099c59-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068632 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f199b766-a6b0-42f9-9fd7-a618ba099c59-config\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.068647 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-client\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.070693 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-encryption-config\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.070733 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f199b766-a6b0-42f9-9fd7-a618ba099c59-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.071263 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.071821 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 
00:10:45.071860 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-config\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.072621 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.072700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-images\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.073070 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-available-featuregates\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.073229 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-dir\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc 
kubenswrapper[5109]: I0217 00:10:45.073315 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-serving-ca\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.073912 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.074314 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f199b766-a6b0-42f9-9fd7-a618ba099c59-tmp-dir\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.074285 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-audit-policies\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.083576 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-serving-cert\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.074847 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de428bc8-27d8-4397-877f-20f8105de9d0-config\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.075311 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.075473 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-config\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.075551 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.077385 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.077442 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-tw52v"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.084204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.079026 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.079140 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f199b766-a6b0-42f9-9fd7-a618ba099c59-serving-cert\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.080317 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-encryption-config\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.080664 5109 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.081059 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-serving-cert\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.074700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.077565 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.085097 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de428bc8-27d8-4397-877f-20f8105de9d0-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.085972 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-etcd-client\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.087102 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.098179 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.098404 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.101184 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.105872 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.106024 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.107424 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.109197 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.109355 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.113972 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.114618 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.114776 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.118448 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.118677 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.122378 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.129043 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.132867 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5mdds"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.133334 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141109 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141146 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141157 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c5txm"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141169 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-5pqmv"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141211 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141224 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7l95k"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141232 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141242 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cbqfq"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.141585 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144064 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144090 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5mdds"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144101 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144110 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144119 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rgvbj"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144129 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144137 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144145 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-n967f"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144153 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-l5t5g"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144160 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-jdfgh"] Feb 17 00:10:45 
crc kubenswrapper[5109]: I0217 00:10:45.144169 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qsvff"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144177 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144188 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144196 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144204 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144212 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144220 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144229 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144237 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144244 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.144253 5109 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-ingress-canary/ingress-canary-g5z9b"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.146503 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.146525 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-546f6"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.146790 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.146816 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.147258 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150900 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150926 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150935 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150947 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5z9b"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150956 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"] Feb 17 00:10:45 crc kubenswrapper[5109]: 
I0217 00:10:45.150963 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150971 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-546f6"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150979 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.150990 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.151003 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-tw52v"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.151016 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mrr4k"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.151366 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.158998 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.167763 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171664 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-trusted-ca\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171750 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74763348-8544-4540-85b9-d85677c7c733-machine-approver-tls\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171850 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84650701-493a-45a1-abec-a28ecdba6c44-serving-cert\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171892 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-config\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171914 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qsfpw\" (UniqueName: \"kubernetes.io/projected/84650701-493a-45a1-abec-a28ecdba6c44-kube-api-access-qsfpw\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171941 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-auth-proxy-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.171973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5chvz\" (UniqueName: \"kubernetes.io/projected/74763348-8544-4540-85b9-d85677c7c733-kube-api-access-5chvz\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.173093 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-config\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " 
pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.173222 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84650701-493a-45a1-abec-a28ecdba6c44-trusted-ca\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.179736 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84650701-493a-45a1-abec-a28ecdba6c44-serving-cert\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.190919 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.207929 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.229578 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.247080 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.282503 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.307931 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.310235 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c5txm"] Feb 17 00:10:45 crc kubenswrapper[5109]: W0217 00:10:45.321018 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6badb98_48f9_46ff_9aca_7e1cecfb0ef2.slice/crio-6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a WatchSource:0}: Error finding container 6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a: Status 404 returned error can't find the container with id 6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.326395 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-xhskt"] Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.327406 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.347060 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.352711 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"] Feb 17 00:10:45 crc kubenswrapper[5109]: W0217 00:10:45.359436 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4411dd_78f7_458e_b92b_e1670922138d.slice/crio-4d61dc404432c1e8946db81896045998265f4c86227604ea33706d4137e5abad WatchSource:0}: Error finding container 
4d61dc404432c1e8946db81896045998265f4c86227604ea33706d4137e5abad: Status 404 returned error can't find the container with id 4d61dc404432c1e8946db81896045998265f4c86227604ea33706d4137e5abad Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.367966 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.387945 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.398907 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/74763348-8544-4540-85b9-d85677c7c733-machine-approver-tls\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.407952 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.427807 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.433073 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-auth-proxy-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.446878 5109 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.453401 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74763348-8544-4540-85b9-d85677c7c733-config\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.467646 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.488161 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.527161 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.547484 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.566539 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.587646 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.607838 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" 
Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.627067 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.647305 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.668565 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.687404 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.708040 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.739773 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.749580 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.768127 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.787314 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.807827 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.829400 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.848786 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.867834 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.887392 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.908231 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.928879 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.948860 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.971350 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:45 crc kubenswrapper[5109]: I0217 00:10:45.988057 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.007312 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.027740 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.048283 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" event={"ID":"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2","Type":"ContainerStarted","Data":"854ea946ed69699e59c5630202e48752d2e227d4c574ba5c9f267fcf2028141a"} Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.048339 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" event={"ID":"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2","Type":"ContainerStarted","Data":"6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a"} Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.049033 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.049381 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.050883 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" event={"ID":"cf4411dd-78f7-458e-b92b-e1670922138d","Type":"ContainerStarted","Data":"3611fea1f040b89ed0cf69af4ec7d3876bb5b3a914c9fb1458a5b9e6901f615d"} Feb 
17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.050944 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" event={"ID":"cf4411dd-78f7-458e-b92b-e1670922138d","Type":"ContainerStarted","Data":"4d61dc404432c1e8946db81896045998265f4c86227604ea33706d4137e5abad"} Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.052172 5109 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-c5txm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.052222 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.052508 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.056504 5109 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-shrhn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.056585 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.057115 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xhskt" event={"ID":"4aa0e237-cb03-44d4-bf30-949ab25f2e12","Type":"ContainerStarted","Data":"f813324f4020298a2c32888666275a03a03a84c1a0be084322f77ff38ad3a69c"} Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.057184 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-xhskt" event={"ID":"4aa0e237-cb03-44d4-bf30-949ab25f2e12","Type":"ContainerStarted","Data":"ced8a5b76d58dc664223532a7985153c890987ce640039798c520b8a31f8950f"} Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.066244 5109 request.go:752] "Waited before sending request" delay="1.009870538s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.068386 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.088233 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.108096 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.128456 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.147413 5109 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.168158 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.188095 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.208242 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.227375 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.248351 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.267840 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.284529 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.284790 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.284760746 +0000 UTC m=+133.616315534 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.284916 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.284968 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.285011 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.285092 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285109 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285120 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285138 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285231 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.285209998 +0000 UTC m=+133.616764746 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285235 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285238 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285251 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285272 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285300 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285287 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.285276809 +0000 UTC m=+133.616831597 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285358 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.285345561 +0000 UTC m=+133.616900319 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.285371 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.285364432 +0000 UTC m=+133.616919270 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.288532 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.307075 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.346311 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f199b766-a6b0-42f9-9fd7-a618ba099c59-kube-api-access\") pod \"kube-apiserver-operator-575994946d-5jxv5\" (UID: \"f199b766-a6b0-42f9-9fd7-a618ba099c59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.378120 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74z5\" (UniqueName: \"kubernetes.io/projected/bf8b5e00-d02f-4e7f-a49c-b0304f07410b-kube-api-access-z74z5\") pod \"apiserver-8596bd845d-dhxx7\" (UID: \"bf8b5e00-d02f-4e7f-a49c-b0304f07410b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.382740 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gknkb\" (UniqueName: \"kubernetes.io/projected/abd1baa1-4b4c-459b-b487-5dd283fe0ad9-kube-api-access-gknkb\") pod \"downloads-747b44746d-5pqmv\" (UID: \"abd1baa1-4b4c-459b-b487-5dd283fe0ad9\") " pod="openshift-console/downloads-747b44746d-5pqmv"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.388237 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.388398 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: E0217 00:10:46.388486 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs podName:1d9259cd-7490-4a4f-b09c-db6d25fadf0e nodeName:}" failed. No retries permitted until 2026-02-17 00:11:02.38846722 +0000 UTC m=+133.720022038 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs") pod "network-metrics-daemon-t9gkm" (UID: "1d9259cd-7490-4a4f-b09c-db6d25fadf0e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.408571 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2wc\" (UniqueName: \"kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc\") pod \"image-pruner-29521440-n967f\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") " pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.422461 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9nt\" (UniqueName: \"kubernetes.io/projected/de428bc8-27d8-4397-877f-20f8105de9d0-kube-api-access-5c9nt\") pod \"openshift-controller-manager-operator-686468bdd5-glh8p\" (UID: \"de428bc8-27d8-4397-877f-20f8105de9d0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.449564 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtxl\" (UniqueName: \"kubernetes.io/projected/d1ec3e6e-d123-47dd-bd2f-63d924f5129e-kube-api-access-dvtxl\") pod \"openshift-config-operator-5777786469-l5t5g\" (UID: \"d1ec3e6e-d123-47dd-bd2f-63d924f5129e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.451719 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.463559 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.463584 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.463559 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.463582 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.468460 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7d4\" (UniqueName: \"kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4\") pod \"controller-manager-65b6cccf98-r8nwv\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.485470 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4236f2e-adff-48cd-ad0c-f95a2871ef5b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mhlc4\" (UID: \"b4236f2e-adff-48cd-ad0c-f95a2871ef5b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.505929 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.506451 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zx5j\" (UniqueName: \"kubernetes.io/projected/2f13cd6d-3c3b-4ed8-b692-cfe56a634a19-kube-api-access-5zx5j\") pod \"machine-api-operator-755bb95488-7l95k\" (UID: \"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19\") " pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.507912 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.518432 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.526603 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-5pqmv"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.526927 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.544055 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.547980 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.551069 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.557121 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.568491 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.577888 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.592656 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.612150 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.630288 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.648890 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.669234 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.689895 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.694237 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.709029 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.710779 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p"]
Feb 17 00:10:46 crc kubenswrapper[5109]: W0217 00:10:46.724085 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde428bc8_27d8_4397_877f_20f8105de9d0.slice/crio-fc583e7c0e5a71f50cb79661877b89faca688c820d3095c4719e913cd5a87c20 WatchSource:0}: Error finding container fc583e7c0e5a71f50cb79661877b89faca688c820d3095c4719e913cd5a87c20: Status 404 returned error can't find the container with id fc583e7c0e5a71f50cb79661877b89faca688c820d3095c4719e913cd5a87c20
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.728180 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.746734 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.753357 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-5pqmv"]
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.766899 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.787313 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 17 00:10:46 crc kubenswrapper[5109]: W0217 00:10:46.805254 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabd1baa1_4b4c_459b_b487_5dd283fe0ad9.slice/crio-3522d716b6f76a084ad4d6e475a5f2bda53b4ffd5e0e00efc6e590e8745c698e WatchSource:0}: Error finding container 3522d716b6f76a084ad4d6e475a5f2bda53b4ffd5e0e00efc6e590e8745c698e: Status 404 returned error can't find the container with id 3522d716b6f76a084ad4d6e475a5f2bda53b4ffd5e0e00efc6e590e8745c698e
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.812623 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.835565 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.848173 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.867206 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.888577 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.908883 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.919351 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5"]
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.925456 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"]
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.926928 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 17 00:10:46 crc kubenswrapper[5109]: W0217 00:10:46.944508 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf199b766_a6b0_42f9_9fd7_a618ba099c59.slice/crio-bc97aabaf13abf35d386a61f61bb1cd0e6e32e17ea3df9b577968d18ff0bc6a5 WatchSource:0}: Error finding container bc97aabaf13abf35d386a61f61bb1cd0e6e32e17ea3df9b577968d18ff0bc6a5: Status 404 returned error can't find the container with id bc97aabaf13abf35d386a61f61bb1cd0e6e32e17ea3df9b577968d18ff0bc6a5
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.946927 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 17 00:10:46 crc kubenswrapper[5109]: W0217 00:10:46.947713 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19c89c8_8db7_461b_bf1f_61133b64a2da.slice/crio-bf7dc40e15b2b66840f06f571e80c4a0310e6ab5a37b7f5f4d56a64589e29558 WatchSource:0}: Error finding container bf7dc40e15b2b66840f06f571e80c4a0310e6ab5a37b7f5f4d56a64589e29558: Status 404 returned error can't find the container with id bf7dc40e15b2b66840f06f571e80c4a0310e6ab5a37b7f5f4d56a64589e29558
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.967134 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 17 00:10:46 crc kubenswrapper[5109]: I0217 00:10:46.988982 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.008271 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.027657 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.048990 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.059536 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-l5t5g"]
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.066721 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29521440-n967f"]
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.068002 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-7l95k"]
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.068050 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.079072 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7"]
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.089417 5109 request.go:752] "Waited before sending request" delay="1.942270821s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.093585 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.093903 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" event={"ID":"f19c89c8-8db7-461b-bf1f-61133b64a2da","Type":"ContainerStarted","Data":"bf7dc40e15b2b66840f06f571e80c4a0310e6ab5a37b7f5f4d56a64589e29558"}
Feb 17 00:10:47 crc kubenswrapper[5109]: W0217 00:10:47.096154 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a08715e_e52f_4251_9b13_72f93eacb031.slice/crio-63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda WatchSource:0}: Error finding container 63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda: Status 404 returned error can't find the container with id 63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.096755 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-5pqmv" event={"ID":"abd1baa1-4b4c-459b-b487-5dd283fe0ad9","Type":"ContainerStarted","Data":"3522d716b6f76a084ad4d6e475a5f2bda53b4ffd5e0e00efc6e590e8745c698e"}
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.102652 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4"]
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.108912 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.112470 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" event={"ID":"de428bc8-27d8-4397-877f-20f8105de9d0","Type":"ContainerStarted","Data":"773a692523693824466d972a1879173f06c8adbfee7e5ca0e97a26cd15cea2d8"}
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.112519 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" event={"ID":"de428bc8-27d8-4397-877f-20f8105de9d0","Type":"ContainerStarted","Data":"fc583e7c0e5a71f50cb79661877b89faca688c820d3095c4719e913cd5a87c20"}
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.121835 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" event={"ID":"f199b766-a6b0-42f9-9fd7-a618ba099c59","Type":"ContainerStarted","Data":"bc97aabaf13abf35d386a61f61bb1cd0e6e32e17ea3df9b577968d18ff0bc6a5"}
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.127151 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.133223 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.147871 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.167621 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.187434 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.207464 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.227832 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.247950 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.283445 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chvz\" (UniqueName: \"kubernetes.io/projected/74763348-8544-4540-85b9-d85677c7c733-kube-api-access-5chvz\") pod \"machine-approver-54c688565-f4gw4\" (UID: \"74763348-8544-4540-85b9-d85677c7c733\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.303236 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsfpw\" (UniqueName: \"kubernetes.io/projected/84650701-493a-45a1-abec-a28ecdba6c44-kube-api-access-qsfpw\") pod \"console-operator-67c89758df-rgvbj\" (UID: \"84650701-493a-45a1-abec-a28ecdba6c44\") " pod="openshift-console-operator/console-operator-67c89758df-rgvbj"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.348224 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.370535 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.388339 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407036 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407559 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407615 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407641 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407699 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-metrics-certs\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407732 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-client\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407774 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-config\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407937 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407958 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.407975 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408010 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408025 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408041 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408055 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-serving-cert\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408089 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6335485b-ac6f-4574-9590-d11aee2f8cf5-metrics-tls\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408103 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94tsc\" (UniqueName: \"kubernetes.io/projected/a4d85031-8c4b-4260-9279-77b7e3a7d75d-kube-api-access-94tsc\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408120 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-encryption-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408137 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcdm\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408175 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.408367 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn2b8\" (UniqueName: \"kubernetes.io/projected/9136f9dd-6527-4547-8085-2bc46041383b-kube-api-access-vn2b8\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.408835 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:47.908814353 +0000 UTC m=+119.240369111 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.409255 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.409311 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.409333 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5254t\" (UniqueName: \"kubernetes.io/projected/a6fd08da-3534-483e-a717-cad005275c5a-kube-api-access-5254t\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.409741 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.409885 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410435 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410513 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410543 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6335485b-ac6f-4574-9590-d11aee2f8cf5-tmp-dir\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410667 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410711 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410729 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd08da-3534-483e-a717-cad005275c5a-serving-cert\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410757 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.410964 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zg2\" (UniqueName: 
\"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-kube-api-access-w9zg2\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411095 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-image-import-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411136 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-default-certificate\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411157 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411211 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit-dir\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" 
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411278 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411304 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411352 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4d85031-8c4b-4260-9279-77b7e3a7d75d-service-ca-bundle\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411381 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-stats-auth\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411574 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwgz\" (UniqueName: \"kubernetes.io/projected/6335485b-ac6f-4574-9590-d11aee2f8cf5-kube-api-access-5bwgz\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411656 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn89w\" (UniqueName: \"kubernetes.io/projected/fcfae8bf-91d7-48d3-a978-1510fe282c92-kube-api-access-pn89w\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.411683 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9136f9dd-6527-4547-8085-2bc46041383b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.412254 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.412298 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9136f9dd-6527-4547-8085-2bc46041383b-config\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.428848 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.447433 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.463528 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-rgvbj"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514268 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514435 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514462 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-certs\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq"
Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.514500 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.014458548 +0000 UTC m=+119.346013336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514639 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvfp\" (UniqueName: \"kubernetes.io/projected/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-kube-api-access-8jvfp\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514792 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4d85031-8c4b-4260-9279-77b7e3a7d75d-service-ca-bundle\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514863 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-stats-auth\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514898 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr25v\" (UniqueName: \"kubernetes.io/projected/4b9beca8-fe19-4c26-81e5-cdd53b27036a-kube-api-access-cr25v\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514925 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-mountpoint-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514969 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwgz\" (UniqueName: \"kubernetes.io/projected/6335485b-ac6f-4574-9590-d11aee2f8cf5-kube-api-access-5bwgz\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.514995 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-cabundle\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515050 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-apiservice-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515080 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9136f9dd-6527-4547-8085-2bc46041383b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515105 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-client\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515127 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-serving-cert\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515149 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515177 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515201 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-images\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515225 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515257 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515278 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-webhook-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515318 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9136f9dd-6527-4547-8085-2bc46041383b-config\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515342 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2dv\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-kube-api-access-qb2dv\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515364 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmrb\" (UniqueName: \"kubernetes.io/projected/cd7d724c-de71-4a3f-b43d-1d09799504cb-kube-api-access-wwmrb\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515403 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldl8r\" (UniqueName: \"kubernetes.io/projected/0a192868-bc11-40e5-92ff-01df13a30588-kube-api-access-ldl8r\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515427 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vn2b8\" (UniqueName: \"kubernetes.io/projected/9136f9dd-6527-4547-8085-2bc46041383b-kube-api-access-vn2b8\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515470 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-metrics-certs\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515499 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515519 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-serving-cert\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515548 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515572 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-webhook-certs\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515614 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-plugins-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515672 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515695 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-config\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515721 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65w5\" (UniqueName: \"kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515748 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-encryption-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515789 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6335485b-ac6f-4574-9590-d11aee2f8cf5-metrics-tls\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515812 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-94tsc\" (UniqueName: \"kubernetes.io/projected/a4d85031-8c4b-4260-9279-77b7e3a7d75d-kube-api-access-94tsc\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515840 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515862 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-config-volume\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515910 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515932 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-tmpfs\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515953 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d92c5c-658a-4b48-bbb5-db94b12d98a1-cert\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.515975 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516018 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"kube-api-access-hjjvg\" (UniqueName: \"kubernetes.io/projected/a095f37e-9225-494c-908e-66bfa750642b-kube-api-access-hjjvg\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516034 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4d85031-8c4b-4260-9279-77b7e3a7d75d-service-ca-bundle\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516088 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-node-bootstrap-token\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516188 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516228 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 
00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516263 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9448c632-b480-4584-be47-2b81207d4346-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516315 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516371 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9448c632-b480-4584-be47-2b81207d4346-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516401 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516443 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume\") pod 
\"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516472 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b8n\" (UniqueName: \"kubernetes.io/projected/e3bc1644-c40b-4971-9b76-c7c63a334ed3-kube-api-access-g6b8n\") pod \"migrator-866fcbc849-mlw2f\" (UID: \"e3bc1644-c40b-4971-9b76-c7c63a334ed3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516511 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516545 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4bz\" (UniqueName: \"kubernetes.io/projected/9448c632-b480-4584-be47-2b81207d4346-kube-api-access-5g4bz\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516575 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-serving-cert\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516828 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-image-import-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516868 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kdd\" (UniqueName: \"kubernetes.io/projected/03c9c6ff-db85-463c-8af8-5589f2af76f0-kube-api-access-b7kdd\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516932 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-default-certificate\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.516969 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwvd\" (UniqueName: \"kubernetes.io/projected/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-kube-api-access-xvwvd\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517003 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwgd\" (UniqueName: \"kubernetes.io/projected/99aef4f8-4236-448e-94bb-cca311ff5d9b-kube-api-access-xrwgd\") pod 
\"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517034 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517061 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517093 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8w6p\" (UniqueName: \"kubernetes.io/projected/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-kube-api-access-n8w6p\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517121 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-srv-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.517200 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a095f37e-9225-494c-908e-66bfa750642b-tmpfs\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518384 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pn89w\" (UniqueName: \"kubernetes.io/projected/fcfae8bf-91d7-48d3-a978-1510fe282c92-kube-api-access-pn89w\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518411 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-metrics-tls\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518436 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-registration-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518484 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghhlb\" (UniqueName: \"kubernetes.io/projected/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-kube-api-access-ghhlb\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 
00:10:47.518525 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdzk\" (UniqueName: \"kubernetes.io/projected/1da6d131-80fb-4fce-9613-5dc3c320889f-kube-api-access-qfdzk\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518542 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-tmp-dir\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518562 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-client\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518586 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-config\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518627 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbg9\" (UniqueName: \"kubernetes.io/projected/6b63354c-ebcc-4f47-895a-dc4361085ce6-kube-api-access-zlbg9\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: 
\"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518646 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.518959 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519774 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519829 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-csi-data-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519861 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-config\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519906 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7d724c-de71-4a3f-b43d-1d09799504cb-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519932 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99aef4f8-4236-448e-94bb-cca311ff5d9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519958 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519982 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit-dir\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: 
\"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520059 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt4f\" (UniqueName: \"kubernetes.io/projected/37d92c5c-658a-4b48-bbb5-db94b12d98a1-kube-api-access-wgt4f\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520107 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cgl\" (UniqueName: \"kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520133 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxddw\" (UniqueName: \"kubernetes.io/projected/693f1ea3-457d-4233-844f-1125adaa9fa9-kube-api-access-dxddw\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520158 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520182 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520207 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5254t\" (UniqueName: \"kubernetes.io/projected/a6fd08da-3534-483e-a717-cad005275c5a-kube-api-access-5254t\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520261 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520289 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520314 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b63354c-ebcc-4f47-895a-dc4361085ce6-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: 
\"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520347 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-config\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520382 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520396 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9136f9dd-6527-4547-8085-2bc46041383b-config\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520406 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520443 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520468 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-key\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520491 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5bb\" (UniqueName: \"kubernetes.io/projected/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-kube-api-access-lb5bb\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520514 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0a192868-bc11-40e5-92ff-01df13a30588-tmpfs\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520545 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjcdm\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520567 
5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.519785 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520859 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/03c9c6ff-db85-463c-8af8-5589f2af76f0-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520903 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjj8\" (UniqueName: \"kubernetes.io/projected/d0a7d391-0b51-463d-a695-b09800b9efba-kube-api-access-6pjj8\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.520959 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521011 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521070 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7d724c-de71-4a3f-b43d-1d09799504cb-config\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521096 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521124 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-srv-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521146 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0a7d391-0b51-463d-a695-b09800b9efba-tmp-dir\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521237 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd08da-3534-483e-a717-cad005275c5a-serving-cert\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521263 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6d131-80fb-4fce-9613-5dc3c320889f-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521269 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-config\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521288 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e0b7021-c99b-4fab-9c19-51affb4ad611-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " 
pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521325 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0b7021-c99b-4fab-9c19-51affb4ad611-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.521382 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.02136305 +0000 UTC m=+119.352917808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.521404 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.522106 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-config\") pod 
\"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.522449 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.522904 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523096 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523122 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523471 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523507 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523548 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit-dir\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523575 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-encryption-config\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523695 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fcfae8bf-91d7-48d3-a978-1510fe282c92-node-pullsecrets\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523797 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t9kwl\" (UniqueName: \"kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.523835 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.524136 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.524755 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6335485b-ac6f-4574-9590-d11aee2f8cf5-tmp-dir\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.524804 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 
00:10:47.525165 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zg2\" (UniqueName: \"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-kube-api-access-w9zg2\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.525843 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6335485b-ac6f-4574-9590-d11aee2f8cf5-tmp-dir\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.525998 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-audit\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.526326 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.526780 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.526845 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-socket-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.526988 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.527757 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9136f9dd-6527-4547-8085-2bc46041383b-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.527915 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fcfae8bf-91d7-48d3-a978-1510fe282c92-image-import-ca\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.529701 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-etcd-client\") pod \"apiserver-9ddfb9f55-qsvff\" 
(UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.529869 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcfae8bf-91d7-48d3-a978-1510fe282c92-serving-cert\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.530131 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fd08da-3534-483e-a717-cad005275c5a-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.533030 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.533189 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-stats-auth\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.533279 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls\") pod 
\"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.533709 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.534178 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-metrics-certs\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.535178 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fd08da-3534-483e-a717-cad005275c5a-serving-cert\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.536140 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a4d85031-8c4b-4260-9279-77b7e3a7d75d-default-certificate\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.540761 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6335485b-ac6f-4574-9590-d11aee2f8cf5-metrics-tls\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.549447 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.582431 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4e260ed-b1aa-426b-b93a-0b15c08ca7ed-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-fp844\" (UID: \"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: W0217 00:10:47.596232 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74763348_8544_4540_85b9_d85677c7c733.slice/crio-a812f99fb3351a903eb2aa5725f34471d570896ee63dc9263c41bd264cf4d5fb WatchSource:0}: Error finding container a812f99fb3351a903eb2aa5725f34471d570896ee63dc9263c41bd264cf4d5fb: Status 404 returned error can't find the container with id a812f99fb3351a903eb2aa5725f34471d570896ee63dc9263c41bd264cf4d5fb Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.601254 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwgz\" (UniqueName: \"kubernetes.io/projected/6335485b-ac6f-4574-9590-d11aee2f8cf5-kube-api-access-5bwgz\") pod \"dns-operator-799b87ffcd-dqtqd\" (UID: \"6335485b-ac6f-4574-9590-d11aee2f8cf5\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.611960 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-94tsc\" 
(UniqueName: \"kubernetes.io/projected/a4d85031-8c4b-4260-9279-77b7e3a7d75d-kube-api-access-94tsc\") pod \"router-default-68cf44c8b8-rw5p4\" (UID: \"a4d85031-8c4b-4260-9279-77b7e3a7d75d\") " pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.625900 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn2b8\" (UniqueName: \"kubernetes.io/projected/9136f9dd-6527-4547-8085-2bc46041383b-kube-api-access-vn2b8\") pod \"openshift-apiserver-operator-846cbfc458-vhpw4\" (UID: \"9136f9dd-6527-4547-8085-2bc46041383b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.633409 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.633609 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.133567517 +0000 UTC m=+119.465122275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.633753 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-csi-data-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.633778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-config\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.633804 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7d724c-de71-4a3f-b43d-1d09799504cb-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.633944 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-csi-data-dir\") pod \"csi-hostpathplugin-546f6\" (UID: 
\"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634340 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99aef4f8-4236-448e-94bb-cca311ff5d9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634381 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634437 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt4f\" (UniqueName: \"kubernetes.io/projected/37d92c5c-658a-4b48-bbb5-db94b12d98a1-kube-api-access-wgt4f\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634459 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v6cgl\" (UniqueName: \"kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634476 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dxddw\" (UniqueName: \"kubernetes.io/projected/693f1ea3-457d-4233-844f-1125adaa9fa9-kube-api-access-dxddw\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634524 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634543 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b63354c-ebcc-4f47-895a-dc4361085ce6-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634583 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634611 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-key\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:47 crc 
kubenswrapper[5109]: I0217 00:10:47.634628 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lb5bb\" (UniqueName: \"kubernetes.io/projected/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-kube-api-access-lb5bb\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634661 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0a192868-bc11-40e5-92ff-01df13a30588-tmpfs\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634683 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/03c9c6ff-db85-463c-8af8-5589f2af76f0-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634700 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjj8\" (UniqueName: \"kubernetes.io/projected/d0a7d391-0b51-463d-a695-b09800b9efba-kube-api-access-6pjj8\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634752 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7d724c-de71-4a3f-b43d-1d09799504cb-config\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" 
(UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634770 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634787 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-srv-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634825 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0a7d391-0b51-463d-a695-b09800b9efba-tmp-dir\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634858 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6d131-80fb-4fce-9613-5dc3c320889f-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634874 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e0b7021-c99b-4fab-9c19-51affb4ad611-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634904 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0b7021-c99b-4fab-9c19-51affb4ad611-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.634928 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.134907623 +0000 UTC m=+119.466462431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.634990 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9kwl\" (UniqueName: \"kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635021 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635066 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635092 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-socket-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635113 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-certs\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635150 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvfp\" (UniqueName: \"kubernetes.io/projected/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-kube-api-access-8jvfp\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635182 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cr25v\" (UniqueName: \"kubernetes.io/projected/4b9beca8-fe19-4c26-81e5-cdd53b27036a-kube-api-access-cr25v\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635205 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-mountpoint-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635273 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-cabundle\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635302 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-apiservice-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635346 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-serving-cert\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635367 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635387 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-images\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635428 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635449 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635463 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-webhook-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635491 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2dv\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-kube-api-access-qb2dv\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635509 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmrb\" (UniqueName: \"kubernetes.io/projected/cd7d724c-de71-4a3f-b43d-1d09799504cb-kube-api-access-wwmrb\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635531 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldl8r\" (UniqueName: \"kubernetes.io/projected/0a192868-bc11-40e5-92ff-01df13a30588-kube-api-access-ldl8r\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635573 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-webhook-certs\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635607 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-plugins-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635633 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v65w5\" (UniqueName: \"kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635667 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635682 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-config-volume\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635711 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-tmpfs\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635731 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d92c5c-658a-4b48-bbb5-db94b12d98a1-cert\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635752 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635776 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjvg\" (UniqueName: \"kubernetes.io/projected/a095f37e-9225-494c-908e-66bfa750642b-kube-api-access-hjjvg\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635797 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-node-bootstrap-token\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635827 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9448c632-b480-4584-be47-2b81207d4346-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635855 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9448c632-b480-4584-be47-2b81207d4346-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635871 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635892 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635910 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6b8n\" (UniqueName: \"kubernetes.io/projected/e3bc1644-c40b-4971-9b76-c7c63a334ed3-kube-api-access-g6b8n\") pod \"migrator-866fcbc849-mlw2f\" (UID: \"e3bc1644-c40b-4971-9b76-c7c63a334ed3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635930 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4bz\" (UniqueName: \"kubernetes.io/projected/9448c632-b480-4584-be47-2b81207d4346-kube-api-access-5g4bz\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635947 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-serving-cert\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635971 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kdd\" (UniqueName: \"kubernetes.io/projected/03c9c6ff-db85-463c-8af8-5589f2af76f0-kube-api-access-b7kdd\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.635998 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwvd\" (UniqueName: \"kubernetes.io/projected/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-kube-api-access-xvwvd\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636026 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwgd\" (UniqueName: \"kubernetes.io/projected/99aef4f8-4236-448e-94bb-cca311ff5d9b-kube-api-access-xrwgd\") pod \"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636061 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n8w6p\" (UniqueName: \"kubernetes.io/projected/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-kube-api-access-n8w6p\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636078 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-srv-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636106 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a095f37e-9225-494c-908e-66bfa750642b-tmpfs\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636138 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-metrics-tls\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636153 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636176 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-registration-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636207 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghhlb\" (UniqueName: \"kubernetes.io/projected/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-kube-api-access-ghhlb\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636216 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-mountpoint-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636256 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdzk\" (UniqueName: \"kubernetes.io/projected/1da6d131-80fb-4fce-9613-5dc3c320889f-kube-api-access-qfdzk\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636275 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-tmp-dir\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636555 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-tmp-dir\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636825 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b63354c-ebcc-4f47-895a-dc4361085ce6-images\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636871 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-client\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636902 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-config\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636928 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbg9\" (UniqueName: \"kubernetes.io/projected/6b63354c-ebcc-4f47-895a-dc4361085ce6-kube-api-access-zlbg9\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.636955 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.637292 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-cabundle\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.637499 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.637576 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.646873 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-registration-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.651945 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.652572 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-config\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.653517 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9448c632-b480-4584-be47-2b81207d4346-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.655806 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-socket-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.655835 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-ca\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.657475 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9448c632-b480-4584-be47-2b81207d4346-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.657716 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.658051 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/693f1ea3-457d-4233-844f-1125adaa9fa9-plugins-dir\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.658203 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9e0b7021-c99b-4fab-9c19-51affb4ad611-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.658566 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.658614 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.658732 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.659130 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0b7021-c99b-4fab-9c19-51affb4ad611-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.659347 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99aef4f8-4236-448e-94bb-cca311ff5d9b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.660712 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-signing-key\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.661233 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.662994 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0a192868-bc11-40e5-92ff-01df13a30588-tmpfs\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.663731 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-config-volume\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.664219 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/03c9c6ff-db85-463c-8af8-5589f2af76f0-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.665038 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0a7d391-0b51-463d-a695-b09800b9efba-tmp-dir\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.665467 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1da6d131-80fb-4fce-9613-5dc3c320889f-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.666058 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.666646 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn89w\" (UniqueName: \"kubernetes.io/projected/fcfae8bf-91d7-48d3-a978-1510fe282c92-kube-api-access-pn89w\") pod \"apiserver-9ddfb9f55-qsvff\" (UID: \"fcfae8bf-91d7-48d3-a978-1510fe282c92\") " pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.667221 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b63354c-ebcc-4f47-895a-dc4361085ce6-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.668363 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.671489 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-metrics-tls\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.677386 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-etcd-client\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.677567 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0a7d391-0b51-463d-a695-b09800b9efba-serving-cert\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.677701 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-apiservice-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.679161 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-webhook-certs\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.679726 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd7d724c-de71-4a3f-b43d-1d09799504cb-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.680200 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-tmpfs\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.681177 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5254t\" (UniqueName: \"kubernetes.io/projected/a6fd08da-3534-483e-a717-cad005275c5a-kube-api-access-5254t\") pod \"authentication-operator-7f5c659b84-6nhg5\" (UID: \"a6fd08da-3534-483e-a717-cad005275c5a\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.682508 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd7d724c-de71-4a3f-b43d-1d09799504cb-config\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"
Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.683228 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-node-bootstrap-token\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " 
pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.683358 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37d92c5c-658a-4b48-bbb5-db94b12d98a1-cert\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.684044 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-config\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.684485 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a095f37e-9225-494c-908e-66bfa750642b-tmpfs\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.685015 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a192868-bc11-40e5-92ff-01df13a30588-webhook-cert\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.686502 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjcdm\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " 
pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.687372 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a095f37e-9225-494c-908e-66bfa750642b-srv-cert\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.688665 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-serving-cert\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.691176 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.692340 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-srv-cert\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.694179 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4b9beca8-fe19-4c26-81e5-cdd53b27036a-certs\") pod \"machine-config-server-cbqfq\" (UID: 
\"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.722653 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.736726 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-rgvbj"] Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.739779 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.740440 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.240414394 +0000 UTC m=+119.571969152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.740812 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.766443 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zg2\" (UniqueName: \"kubernetes.io/projected/4295a9b8-bd4b-4d7c-8499-1c407ff83e5f-kube-api-access-w9zg2\") pod \"cluster-image-registry-operator-86c45576b9-8dzhp\" (UID: \"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: W0217 00:10:47.773792 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84650701_493a_45a1_abec_a28ecdba6c44.slice/crio-f2e404e9f44c4a8f666b3d7e0a13c983445b60b392c94369a30e8452bbdded4e WatchSource:0}: Error finding container f2e404e9f44c4a8f666b3d7e0a13c983445b60b392c94369a30e8452bbdded4e: Status 404 returned error can't find the container with id f2e404e9f44c4a8f666b3d7e0a13c983445b60b392c94369a30e8452bbdded4e Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.787161 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.797156 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.799109 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.805718 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.807878 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9kwl\" (UniqueName: \"kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl\") pod \"cni-sysctl-allowlist-ds-mrr4k\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.819206 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.826308 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.827547 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt4f\" (UniqueName: \"kubernetes.io/projected/37d92c5c-658a-4b48-bbb5-db94b12d98a1-kube-api-access-wgt4f\") pod \"ingress-canary-g5z9b\" (UID: \"37d92c5c-658a-4b48-bbb5-db94b12d98a1\") " pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.834037 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.841609 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.842070 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.842101 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6cgl\" (UniqueName: \"kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl\") pod \"marketplace-operator-547dbd544d-8f27m\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.842455 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.342440953 +0000 UTC m=+119.673995711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.858002 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.861695 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxddw\" (UniqueName: \"kubernetes.io/projected/693f1ea3-457d-4233-844f-1125adaa9fa9-kube-api-access-dxddw\") pod \"csi-hostpathplugin-546f6\" (UID: \"693f1ea3-457d-4233-844f-1125adaa9fa9\") " pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.894017 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr25v\" (UniqueName: \"kubernetes.io/projected/4b9beca8-fe19-4c26-81e5-cdd53b27036a-kube-api-access-cr25v\") pod \"machine-config-server-cbqfq\" (UID: \"4b9beca8-fe19-4c26-81e5-cdd53b27036a\") " pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.920408 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghhlb\" (UniqueName: 
\"kubernetes.io/projected/4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8-kube-api-access-ghhlb\") pod \"dns-default-5mdds\" (UID: \"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8\") " pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.927976 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjj8\" (UniqueName: \"kubernetes.io/projected/d0a7d391-0b51-463d-a695-b09800b9efba-kube-api-access-6pjj8\") pod \"etcd-operator-69b85846b6-nxc72\" (UID: \"d0a7d391-0b51-463d-a695-b09800b9efba\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.932951 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.944310 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.944538 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.444503854 +0000 UTC m=+119.776058612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.946193 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldl8r\" (UniqueName: \"kubernetes.io/projected/0a192868-bc11-40e5-92ff-01df13a30588-kube-api-access-ldl8r\") pod \"packageserver-7d4fc7d867-47trg\" (UID: \"0a192868-bc11-40e5-92ff-01df13a30588\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.947323 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:47 crc kubenswrapper[5109]: E0217 00:10:47.949395 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.449370082 +0000 UTC m=+119.780924840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.966212 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwgd\" (UniqueName: \"kubernetes.io/projected/99aef4f8-4236-448e-94bb-cca311ff5d9b-kube-api-access-xrwgd\") pod \"control-plane-machine-set-operator-75ffdb6fcd-cwkk6\" (UID: \"99aef4f8-4236-448e-94bb-cca311ff5d9b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" Feb 17 00:10:47 crc kubenswrapper[5109]: I0217 00:10:47.998348 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6b8n\" (UniqueName: \"kubernetes.io/projected/e3bc1644-c40b-4971-9b76-c7c63a334ed3-kube-api-access-g6b8n\") pod \"migrator-866fcbc849-mlw2f\" (UID: \"e3bc1644-c40b-4971-9b76-c7c63a334ed3\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.010104 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.024249 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4bz\" (UniqueName: \"kubernetes.io/projected/9448c632-b480-4584-be47-2b81207d4346-kube-api-access-5g4bz\") pod \"machine-config-controller-f9cdd68f7-xmqnl\" (UID: \"9448c632-b480-4584-be47-2b81207d4346\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.036193 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.046242 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65w5\" (UniqueName: \"kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5\") pod \"collect-profiles-29521440-rcf5s\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.046358 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.053393 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kdd\" (UniqueName: \"kubernetes.io/projected/03c9c6ff-db85-463c-8af8-5589f2af76f0-kube-api-access-b7kdd\") pod \"package-server-manager-77f986bd66-2tl49\" (UID: \"03c9c6ff-db85-463c-8af8-5589f2af76f0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.054915 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.055456 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.555421848 +0000 UTC m=+119.886976606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.055825 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.056009 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cbqfq" Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.057046 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.55703624 +0000 UTC m=+119.888590998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.071843 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5z9b" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.082491 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8w6p\" (UniqueName: \"kubernetes.io/projected/c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1-kube-api-access-n8w6p\") pod \"service-ca-operator-5b9c976747-4xcq8\" (UID: \"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.085924 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwvd\" (UniqueName: \"kubernetes.io/projected/7584ea80-3537-4eda-a17d-6d0ef3c0d7ca-kube-api-access-xvwvd\") pod \"multus-admission-controller-69db94689b-tw52v\" (UID: \"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca\") " pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.088449 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-546f6" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.110248 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjvg\" (UniqueName: \"kubernetes.io/projected/a095f37e-9225-494c-908e-66bfa750642b-kube-api-access-hjjvg\") pod \"catalog-operator-75ff9f647d-6kxlb\" (UID: \"a095f37e-9225-494c-908e-66bfa750642b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.128684 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbg9\" (UniqueName: \"kubernetes.io/projected/6b63354c-ebcc-4f47-895a-dc4361085ce6-kube-api-access-zlbg9\") pod \"machine-config-operator-67c9d58cbb-4rxx6\" (UID: \"6b63354c-ebcc-4f47-895a-dc4361085ce6\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.132746 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" event={"ID":"a4d85031-8c4b-4260-9279-77b7e3a7d75d","Type":"ContainerStarted","Data":"25ab13cc5b9d7ca0e9d2fdc4c40ee97444716db85a8574795f082837d03e6d7b"} Feb 17 00:10:48 crc kubenswrapper[5109]: W0217 00:10:48.139942 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9beca8_fe19_4c26_81e5_cdd53b27036a.slice/crio-26cf2364eaa4a143002b8206de1eaadaec846b16290ec23a42d045b7f51a961c WatchSource:0}: Error finding container 26cf2364eaa4a143002b8206de1eaadaec846b16290ec23a42d045b7f51a961c: Status 404 returned error can't find the container with id 26cf2364eaa4a143002b8206de1eaadaec846b16290ec23a42d045b7f51a961c Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.140271 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-5pqmv" 
event={"ID":"abd1baa1-4b4c-459b-b487-5dd283fe0ad9","Type":"ContainerStarted","Data":"ffa8ced56288b6dee80806a4999694e8e5617bbdb2dcb7be04e57fe82ddce149"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.142491 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-5pqmv" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.152066 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb5bb\" (UniqueName: \"kubernetes.io/projected/59b9b2cd-1da5-4c42-8ec2-fff0e1726882-kube-api-access-lb5bb\") pod \"service-ca-74545575db-jdfgh\" (UID: \"59b9b2cd-1da5-4c42-8ec2-fff0e1726882\") " pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.158321 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" event={"ID":"b3e6e84e-201d-45cb-a34e-351fcc111c55","Type":"ContainerStarted","Data":"85a0374b7e35458fdcac8e7d8152ba22b4bc690809319c1e206e841e6a6ba50e"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.159666 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.161164 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.661144105 +0000 UTC m=+119.992698863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.163251 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" event={"ID":"84650701-493a-45a1-abec-a28ecdba6c44","Type":"ContainerStarted","Data":"f2e404e9f44c4a8f666b3d7e0a13c983445b60b392c94369a30e8452bbdded4e"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.163408 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.163476 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.169516 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvfp\" (UniqueName: \"kubernetes.io/projected/cf6fbed8-60b9-46db-a4e7-efc414eed3c3-kube-api-access-8jvfp\") pod \"olm-operator-5cdf44d969-9mrrp\" (UID: \"cf6fbed8-60b9-46db-a4e7-efc414eed3c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.182352 5109 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.187900 5109 generic.go:358] "Generic (PLEG): container finished" podID="bf8b5e00-d02f-4e7f-a49c-b0304f07410b" containerID="99e21ca6ede4cc378800a9b9178eef8e7f8ca1aa94e4a405c06feb2996bc820e" exitCode=0 Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.188058 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" event={"ID":"bf8b5e00-d02f-4e7f-a49c-b0304f07410b","Type":"ContainerDied","Data":"99e21ca6ede4cc378800a9b9178eef8e7f8ca1aa94e4a405c06feb2996bc820e"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.188100 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" event={"ID":"bf8b5e00-d02f-4e7f-a49c-b0304f07410b","Type":"ContainerStarted","Data":"aa685ba3258a33083fd6798f960f9296176f42eea65b80d0c052e269571e35ae"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.194745 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.197174 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdzk\" (UniqueName: \"kubernetes.io/projected/1da6d131-80fb-4fce-9613-5dc3c320889f-kube-api-access-qfdzk\") pod \"cluster-samples-operator-6b564684c8-vmcp7\" (UID: \"1da6d131-80fb-4fce-9613-5dc3c320889f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.198033 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" event={"ID":"f19c89c8-8db7-461b-bf1f-61133b64a2da","Type":"ContainerStarted","Data":"41942d56c832f013ddb8b9c5a1c0321a5e20d5bc80b20bb9c41b02bf585fbff7"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.199201 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.200388 5109 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-r8nwv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.200427 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.201950 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="d1ec3e6e-d123-47dd-bd2f-63d924f5129e" containerID="aa706857b77a8b5bd8a7040a14e99b4ec8664d0486d81919829cda63037e44e4" exitCode=0 Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.202044 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" event={"ID":"d1ec3e6e-d123-47dd-bd2f-63d924f5129e","Type":"ContainerDied","Data":"aa706857b77a8b5bd8a7040a14e99b4ec8664d0486d81919829cda63037e44e4"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.202082 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" event={"ID":"d1ec3e6e-d123-47dd-bd2f-63d924f5129e","Type":"ContainerStarted","Data":"4a96407ca2dfae91bb87af5858ac1e191e61961755bfc1e3edae2fcee05630d4"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.212293 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmrb\" (UniqueName: \"kubernetes.io/projected/cd7d724c-de71-4a3f-b43d-1d09799504cb-kube-api-access-wwmrb\") pod \"kube-storage-version-migrator-operator-565b79b866-nz44x\" (UID: \"cd7d724c-de71-4a3f-b43d-1d09799504cb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.212467 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" event={"ID":"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19","Type":"ContainerStarted","Data":"b8aa4bb48cef4ceb14a634a356cf6734770126d004c91f97c86b52a16ce88200"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.212498 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" event={"ID":"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19","Type":"ContainerStarted","Data":"7ba345c674b19b14f531b0e068ea91b2b8c25edb99a00e950f7511da3e4d54d7"} Feb 17 00:10:48 
crc kubenswrapper[5109]: I0217 00:10:48.212507 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" event={"ID":"2f13cd6d-3c3b-4ed8-b692-cfe56a634a19","Type":"ContainerStarted","Data":"afab2698cc1ce6eb91d286917f4547520dbbb6255ec33ea77e1f42c68075a761"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.217742 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-n967f" event={"ID":"7a08715e-e52f-4251-9b13-72f93eacb031","Type":"ContainerStarted","Data":"798b9f619c469cdd0e5a1e7c549ab008f123cf3a545ebe74fa7bdf25032188f5"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.217783 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-n967f" event={"ID":"7a08715e-e52f-4251-9b13-72f93eacb031","Type":"ContainerStarted","Data":"63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.222834 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" event={"ID":"b4236f2e-adff-48cd-ad0c-f95a2871ef5b","Type":"ContainerStarted","Data":"795ce8cf9ef7a97ddd37ed53d0547bddacf752dbf7d32b7ba4dbe1f2e0b6e15e"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.222878 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" event={"ID":"b4236f2e-adff-48cd-ad0c-f95a2871ef5b","Type":"ContainerStarted","Data":"b071659d8bca78c25c1bacdc36399d10b055eedcff981f3c1bc18e54d99ecd53"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.238833 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.248846 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.249232 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2dv\" (UniqueName: \"kubernetes.io/projected/9e0b7021-c99b-4fab-9c19-51affb4ad611-kube-api-access-qb2dv\") pod \"ingress-operator-6b9cb4dbcf-2bqqx\" (UID: \"9e0b7021-c99b-4fab-9c19-51affb4ad611\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.256152 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.261378 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.261823 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.761795159 +0000 UTC m=+120.093349917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.280905 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" event={"ID":"f199b766-a6b0-42f9-9fd7-a618ba099c59","Type":"ContainerStarted","Data":"aefe362acd2cf61810d19161b52477f0f7e12d8da6f026772fd5346926057ea6"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.287584 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.288294 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-jdfgh" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.288703 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.290665 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.301499 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" event={"ID":"74763348-8544-4540-85b9-d85677c7c733","Type":"ContainerStarted","Data":"1ad0ac7f94250cd6abc5632c2609ea30533f1abb6735a4e3359cc66689ac765a"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.301535 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" event={"ID":"74763348-8544-4540-85b9-d85677c7c733","Type":"ContainerStarted","Data":"a812f99fb3351a903eb2aa5725f34471d570896ee63dc9263c41bd264cf4d5fb"} Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.302410 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.321528 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.348974 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.364554 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.365915 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.865899924 +0000 UTC m=+120.197454682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.468537 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.474358 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.477901 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.479425 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-dqtqd"] Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.479489 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:48.979454807 +0000 UTC m=+120.311009565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.495522 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp"] Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.511118 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.536248 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-qsvff"] Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.587892 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.588311 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.088293186 +0000 UTC m=+120.419847944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.640678 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mhlc4" podStartSLOduration=97.640659992 podStartE2EDuration="1m37.640659992s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:48.64023086 +0000 UTC m=+119.971785618" watchObservedRunningTime="2026-02-17 00:10:48.640659992 +0000 UTC m=+119.972214750" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.691280 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.692122 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.192100773 +0000 UTC m=+120.523655531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.792612 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.793390 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.293371733 +0000 UTC m=+120.624926491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.900756 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:48 crc kubenswrapper[5109]: E0217 00:10:48.901350 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.401332369 +0000 UTC m=+120.732887127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.914997 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" podStartSLOduration=97.914978118 podStartE2EDuration="1m37.914978118s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:48.867135631 +0000 UTC m=+120.198690409" watchObservedRunningTime="2026-02-17 00:10:48.914978118 +0000 UTC m=+120.246532876" Feb 17 00:10:48 crc kubenswrapper[5109]: I0217 00:10:48.992932 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.005118 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.005583 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:49.505546837 +0000 UTC m=+120.837101595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.080479 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.088496 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.108502 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.108922 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.608901252 +0000 UTC m=+120.940456070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.173772 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nxc72"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.186415 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-546f6"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.210055 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.210825 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.710803629 +0000 UTC m=+121.042358387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.236382 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" podStartSLOduration=98.2363669 podStartE2EDuration="1m38.2363669s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.203088666 +0000 UTC m=+120.534643424" watchObservedRunningTime="2026-02-17 00:10:49.2363669 +0000 UTC m=+120.567921658" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.285772 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29521440-n967f" podStartSLOduration=99.285757658 podStartE2EDuration="1m39.285757658s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.236882974 +0000 UTC m=+120.568437722" watchObservedRunningTime="2026-02-17 00:10:49.285757658 +0000 UTC m=+120.617312416" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.311532 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: 
\"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.312003 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:49.811988187 +0000 UTC m=+121.143542945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.415235 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" event={"ID":"b3e6e84e-201d-45cb-a34e-351fcc111c55","Type":"ContainerStarted","Data":"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.419494 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.420084 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:49.920065006 +0000 UTC m=+121.251619764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.438109 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" event={"ID":"84650701-493a-45a1-abec-a28ecdba6c44","Type":"ContainerStarted","Data":"d393c23d10f3214f5c24b5357516dabba9a53f4d39df1ccc610650f10703e257"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.442836 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.449364 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cbqfq" event={"ID":"4b9beca8-fe19-4c26-81e5-cdd53b27036a","Type":"ContainerStarted","Data":"38dc4f27f5976813e30bc0e7fcd45c5b6496257677cbeaa19471d92f2c93df64"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.449417 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cbqfq" event={"ID":"4b9beca8-fe19-4c26-81e5-cdd53b27036a","Type":"ContainerStarted","Data":"26cf2364eaa4a143002b8206de1eaadaec846b16290ec23a42d045b7f51a961c"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.452189 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" 
event={"ID":"a6fd08da-3534-483e-a717-cad005275c5a","Type":"ContainerStarted","Data":"3e22be496f17d4faf3c1c083567202b9264b710fe60191e18b24cd9142d905dc"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.453349 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" event={"ID":"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f","Type":"ContainerStarted","Data":"5affdbc0e579efa846e63f260f4f85dc6e064d32c3b7ad57574d059d0c1190e8"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.455509 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.459937 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-546f6" event={"ID":"693f1ea3-457d-4233-844f-1125adaa9fa9","Type":"ContainerStarted","Data":"e1799824aedecf1839d056fca50671910005e47c7713af4868f45016a5e5a67d"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.459874 5109 patch_prober.go:28] interesting pod/console-operator-67c89758df-rgvbj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.460441 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" podUID="84650701-493a-45a1-abec-a28ecdba6c44" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.462421 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" 
event={"ID":"d1ec3e6e-d123-47dd-bd2f-63d924f5129e","Type":"ContainerStarted","Data":"e869bbbfb40ab5bc309a74081e3ad3baac2c1e8ba997574683c3c65a74e0d834"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.525372 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.528180 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.028163315 +0000 UTC m=+121.359718073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.531971 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" podStartSLOduration=98.531954905 podStartE2EDuration="1m38.531954905s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.530386444 +0000 UTC m=+120.861941202" watchObservedRunningTime="2026-02-17 00:10:49.531954905 +0000 
UTC m=+120.863509663" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532366 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532387 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" event={"ID":"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed","Type":"ContainerStarted","Data":"11c1ef7e407edb7906a12b5b28b9bc41d9d3bcba2e29d620cc5e46b4bcae0778"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532402 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" event={"ID":"6335485b-ac6f-4574-9590-d11aee2f8cf5","Type":"ContainerStarted","Data":"720334e5840097460db59c5a70ec3b4681f43f98c6f604e7a34121745aee1033"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532443 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532456 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5z9b"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532468 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" event={"ID":"74763348-8544-4540-85b9-d85677c7c733","Type":"ContainerStarted","Data":"e39090dbdbaaead0b8fdc877b03c24162abc70488d820daa54693fff76ebb330"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532477 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" event={"ID":"9136f9dd-6527-4547-8085-2bc46041383b","Type":"ContainerStarted","Data":"9059bfc57bd89adb61afb4bc0ffe5304bb2638b37c3f522bedec6b66bb0e22b1"} Feb 17 00:10:49 
crc kubenswrapper[5109]: I0217 00:10:49.532489 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" event={"ID":"a4d85031-8c4b-4260-9279-77b7e3a7d75d","Type":"ContainerStarted","Data":"38c653a1726fd9f5e586955190b9b8850456f3cbb23bd522d86a27d03ce064e6"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.532498 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" event={"ID":"d0a7d391-0b51-463d-a695-b09800b9efba","Type":"ContainerStarted","Data":"8febf2dbf40447af5fc0d40b37b369ce679dbf3d82a88c9056b31e92bae1459b"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.533919 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.534303 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" event={"ID":"fcfae8bf-91d7-48d3-a978-1510fe282c92","Type":"ContainerStarted","Data":"4bb0f6892a92d6b9e2248f317e5a9efa5b1a3b35c10e3336d4af841923c52431"} Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.534881 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.537048 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5mdds"] Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.560269 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-64d44f6ddf-xhskt" podStartSLOduration=98.560246618 podStartE2EDuration="1m38.560246618s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.55802211 +0000 UTC m=+120.889576868" watchObservedRunningTime="2026-02-17 00:10:49.560246618 +0000 UTC m=+120.891801376" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.599678 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-5jxv5" podStartSLOduration=98.599656734 podStartE2EDuration="1m38.599656734s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.599026207 +0000 UTC m=+120.930580975" watchObservedRunningTime="2026-02-17 00:10:49.599656734 +0000 UTC m=+120.931211492" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.628926 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.629172 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.129146768 +0000 UTC m=+121.460701526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.630093 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.632984 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.132941658 +0000 UTC m=+121.464496416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.731710 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.732096 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.232071862 +0000 UTC m=+121.563626620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.801868 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-glh8p" podStartSLOduration=98.801848665 podStartE2EDuration="1m38.801848665s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:49.757422398 +0000 UTC m=+121.088977166" watchObservedRunningTime="2026-02-17 00:10:49.801848665 +0000 UTC m=+121.133403423" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.808304 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.808422 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.819642 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:49 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:49 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:49 crc kubenswrapper[5109]: healthz check failed Feb 
17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.821502 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.838646 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.839275 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.339259798 +0000 UTC m=+121.670814556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.920409 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"] Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.966658 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-conmon-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.968774 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.968937 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.468903863 +0000 UTC m=+121.800465631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:49 crc kubenswrapper[5109]: I0217 00:10:49.969278 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:49 crc kubenswrapper[5109]: E0217 00:10:49.969972 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.469950751 +0000 UTC m=+121.801505509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.008615 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-7l95k" podStartSLOduration=99.008583475 podStartE2EDuration="1m39.008583475s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.008291938 +0000 UTC m=+121.339846696" watchObservedRunningTime="2026-02-17 00:10:50.008583475 +0000 UTC m=+121.340138233" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.036524 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-5pqmv" podStartSLOduration=99.036505319 podStartE2EDuration="1m39.036505319s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.034007973 +0000 UTC m=+121.365562731" watchObservedRunningTime="2026-02-17 00:10:50.036505319 +0000 UTC m=+121.368060077" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.070479 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.070758 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.570742488 +0000 UTC m=+121.902297246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.147605 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.159668 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.160499 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-tw52v"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.178306 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.178676 5109 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.678664573 +0000 UTC m=+122.010219331 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.195569 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b63354c_ebcc_4f47_895a_dc4361085ce6.slice/crio-ae7c945a0fcc81afdf875b5f34f0d234b6e32d6548ddbdfd884dfb8e99d8292c WatchSource:0}: Error finding container ae7c945a0fcc81afdf875b5f34f0d234b6e32d6548ddbdfd884dfb8e99d8292c: Status 404 returned error can't find the container with id ae7c945a0fcc81afdf875b5f34f0d234b6e32d6548ddbdfd884dfb8e99d8292c Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.203648 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp"] Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.220760 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf6fbed8_60b9_46db_a4e7_efc414eed3c3.slice/crio-dd05d38aa855beefe63a07ee676ca9df7a1a8a5a662adbfe99c7681e95e3e164 WatchSource:0}: Error finding container dd05d38aa855beefe63a07ee676ca9df7a1a8a5a662adbfe99c7681e95e3e164: Status 404 returned error can't find the container with id 
dd05d38aa855beefe63a07ee676ca9df7a1a8a5a662adbfe99c7681e95e3e164 Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.229964 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6"] Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.247883 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda095f37e_9225_494c_908e_66bfa750642b.slice/crio-142d37f1fc41a68c197a784c62e7b3f1640fe76ac6b742e96cbee9953a15844d WatchSource:0}: Error finding container 142d37f1fc41a68c197a784c62e7b3f1640fe76ac6b742e96cbee9953a15844d: Status 404 returned error can't find the container with id 142d37f1fc41a68c197a784c62e7b3f1640fe76ac6b742e96cbee9953a15844d Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.257246 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.262163 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.280367 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.280683 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.780666093 +0000 UTC m=+122.112220851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.290958 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a192868_bc11_40e5_92ff_01df13a30588.slice/crio-8e5451b873779cef4eeade2d5c8ee2b864c9bfc8fcab67958a696aa63a3cb153 WatchSource:0}: Error finding container 8e5451b873779cef4eeade2d5c8ee2b864c9bfc8fcab67958a696aa63a3cb153: Status 404 returned error can't find the container with id 8e5451b873779cef4eeade2d5c8ee2b864c9bfc8fcab67958a696aa63a3cb153 Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.291499 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3bc1644_c40b_4971_9b76_c7c63a334ed3.slice/crio-ed178dd45274f5493674a096d41f0c1a45eebf37e3d019f177d609d2c9498746 WatchSource:0}: Error finding container ed178dd45274f5493674a096d41f0c1a45eebf37e3d019f177d609d2c9498746: Status 404 returned error can't find the container with id ed178dd45274f5493674a096d41f0c1a45eebf37e3d019f177d609d2c9498746 Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.294066 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.306292 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.308633 5109 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.343769 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" podStartSLOduration=99.34375138 podStartE2EDuration="1m39.34375138s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.322137682 +0000 UTC m=+121.653692460" watchObservedRunningTime="2026-02-17 00:10:50.34375138 +0000 UTC m=+121.675306138" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.381474 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.381774 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.881762448 +0000 UTC m=+122.213317206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.401409 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.414581 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.423418 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.437818 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-jdfgh"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.455561 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7"] Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.482354 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.482478 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.982453504 +0000 UTC m=+122.314008262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.482974 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podStartSLOduration=99.482951447 podStartE2EDuration="1m39.482951447s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.481032196 +0000 UTC m=+121.812586954" watchObservedRunningTime="2026-02-17 00:10:50.482951447 +0000 UTC m=+121.814506215" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.484142 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.484733 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-17 00:10:50.984716073 +0000 UTC m=+122.316270831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: W0217 00:10:50.492020 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9448c632_b480_4584_be47_2b81207d4346.slice/crio-630b88e3f58863840cc490ca06538beeac962b43ccd9ed11b791268ec76cd7ad WatchSource:0}: Error finding container 630b88e3f58863840cc490ca06538beeac962b43ccd9ed11b791268ec76cd7ad: Status 404 returned error can't find the container with id 630b88e3f58863840cc490ca06538beeac962b43ccd9ed11b791268ec76cd7ad Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.519499 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-f4gw4" podStartSLOduration=100.519480236 podStartE2EDuration="1m40.519480236s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.51733793 +0000 UTC m=+121.848892708" watchObservedRunningTime="2026-02-17 00:10:50.519480236 +0000 UTC m=+121.851034994" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.553976 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" 
event={"ID":"b4e260ed-b1aa-426b-b93a-0b15c08ca7ed","Type":"ContainerStarted","Data":"0dbe648673582ce8cfc092fcd6d1cd7fedf81f3804b65598b7ed5f6d31e8576b"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.561299 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" event={"ID":"9448c632-b480-4584-be47-2b81207d4346","Type":"ContainerStarted","Data":"630b88e3f58863840cc490ca06538beeac962b43ccd9ed11b791268ec76cd7ad"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.566525 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" event={"ID":"6b63354c-ebcc-4f47-895a-dc4361085ce6","Type":"ContainerStarted","Data":"ae7c945a0fcc81afdf875b5f34f0d234b6e32d6548ddbdfd884dfb8e99d8292c"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.571727 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" event={"ID":"0a192868-bc11-40e5-92ff-01df13a30588","Type":"ContainerStarted","Data":"8e5451b873779cef4eeade2d5c8ee2b864c9bfc8fcab67958a696aa63a3cb153"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.572112 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" podStartSLOduration=100.572095118 podStartE2EDuration="1m40.572095118s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.567950339 +0000 UTC m=+121.899505107" watchObservedRunningTime="2026-02-17 00:10:50.572095118 +0000 UTC m=+121.903649876" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.581347 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" 
event={"ID":"a095f37e-9225-494c-908e-66bfa750642b","Type":"ContainerStarted","Data":"142d37f1fc41a68c197a784c62e7b3f1640fe76ac6b742e96cbee9953a15844d"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.586136 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.586626 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.086584639 +0000 UTC m=+122.418139397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.596547 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" event={"ID":"2ffee5ff-84cf-4dfa-816b-ca1f8b763069","Type":"ContainerStarted","Data":"a46c747b4badb168efbe19a0be2f80a582cc7689b9ee0ccec9eed4167a5f6710"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.604392 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" 
event={"ID":"d0a7d391-0b51-463d-a695-b09800b9efba","Type":"ContainerStarted","Data":"f82f649eb143237f9eac2bb025737e58ec195b0e53e463bbad1cd216f75adf84"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.608976 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" event={"ID":"cd7d724c-de71-4a3f-b43d-1d09799504cb","Type":"ContainerStarted","Data":"5d8ec9982f727510ed0ead3ac677ece5f448c87dfc23af0d2b8d4dcfd5b085fb"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.650561 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" event={"ID":"cf6fbed8-60b9-46db-a4e7-efc414eed3c3","Type":"ContainerStarted","Data":"dd05d38aa855beefe63a07ee676ca9df7a1a8a5a662adbfe99c7681e95e3e164"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.669427 5109 generic.go:358] "Generic (PLEG): container finished" podID="fcfae8bf-91d7-48d3-a978-1510fe282c92" containerID="504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab" exitCode=0 Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.669779 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" event={"ID":"fcfae8bf-91d7-48d3-a978-1510fe282c92","Type":"ContainerDied","Data":"504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.688671 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cbqfq" podStartSLOduration=6.68865423 podStartE2EDuration="6.68865423s" podCreationTimestamp="2026-02-17 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.687916991 +0000 UTC m=+122.019471749" watchObservedRunningTime="2026-02-17 
00:10:50.68865423 +0000 UTC m=+122.020208988" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.689757 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.691338 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.19132192 +0000 UTC m=+122.522876728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.692545 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" event={"ID":"bf8b5e00-d02f-4e7f-a49c-b0304f07410b","Type":"ContainerStarted","Data":"e3a6fc6bb711afc6a106a01a4a816467c92a63dbf8de85084b203c10f45c34a3"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.716289 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" event={"ID":"4295a9b8-bd4b-4d7c-8499-1c407ff83e5f","Type":"ContainerStarted","Data":"b41c24364bd5035c04f7f68c91d2ab6cfb3bb6aedce656f6a0b590998c88d6e8"} Feb 17 
00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.722163 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" podStartSLOduration=6.722123709 podStartE2EDuration="6.722123709s" podCreationTimestamp="2026-02-17 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.72024726 +0000 UTC m=+122.051802018" watchObservedRunningTime="2026-02-17 00:10:50.722123709 +0000 UTC m=+122.053678457" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.753447 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" event={"ID":"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca","Type":"ContainerStarted","Data":"c9367f9586b8508e7d35f1d4bc18fd7c402cb356a2f24b81fa11e4cfe7d6843d"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.793620 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerStarted","Data":"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.793681 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerStarted","Data":"f01be9ad6071723c2499fd4ad7c3dc942e340d35295bec66d4645d98b7cabf68"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.802465 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.802628 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-8f27m container/marketplace-operator namespace/openshift-marketplace: 
Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.802674 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.805541 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.806137 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.306115036 +0000 UTC m=+122.637669794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.806222 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.809062 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.309047423 +0000 UTC m=+122.640602181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.842910 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:50 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:50 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:50 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.842968 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.870241 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" event={"ID":"9e0b7021-c99b-4fab-9c19-51affb4ad611","Type":"ContainerStarted","Data":"adfc0c7bb284f622d655604286c65de2efb44e8012f3d994db2fb41589506111"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.880975 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" event={"ID":"6335485b-ac6f-4574-9590-d11aee2f8cf5","Type":"ContainerStarted","Data":"15f4d344263b7ff53899dae6e85654732e7185397a00196d189f31adca944123"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 
00:10:50.884031 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-fp844" podStartSLOduration=99.884016682 podStartE2EDuration="1m39.884016682s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.882640326 +0000 UTC m=+122.214195084" watchObservedRunningTime="2026-02-17 00:10:50.884016682 +0000 UTC m=+122.215571440" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.909930 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:50 crc kubenswrapper[5109]: E0217 00:10:50.911185 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.411165375 +0000 UTC m=+122.742720143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.936069 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" event={"ID":"9136f9dd-6527-4547-8085-2bc46041383b","Type":"ContainerStarted","Data":"f05a8bfde15b8343ddffe0939b4263497f0831238d5c2be172a864404235b3b8"} Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.941682 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nxc72" podStartSLOduration=99.941662306 podStartE2EDuration="1m39.941662306s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:50.938460952 +0000 UTC m=+122.270015710" watchObservedRunningTime="2026-02-17 00:10:50.941662306 +0000 UTC m=+122.273217064" Feb 17 00:10:50 crc kubenswrapper[5109]: I0217 00:10:50.997355 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-jdfgh" event={"ID":"59b9b2cd-1da5-4c42-8ec2-fff0e1726882","Type":"ContainerStarted","Data":"9e9ba459c79c8d90b4fbab417f45a210bae1a061aa52625561bb21abc6686f04"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.011846 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.012156 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.512136768 +0000 UTC m=+122.843691526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.012917 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-8dzhp" podStartSLOduration=100.012895208 podStartE2EDuration="1m40.012895208s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.006684615 +0000 UTC m=+122.338239373" watchObservedRunningTime="2026-02-17 00:10:51.012895208 +0000 UTC m=+122.344449966" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.019291 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" 
event={"ID":"03c9c6ff-db85-463c-8af8-5589f2af76f0","Type":"ContainerStarted","Data":"0050d765f00c5b32087e8d96be052835cf4b3f444e88a8e79fbab49ad14f6d2b"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.054645 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" event={"ID":"e3bc1644-c40b-4971-9b76-c7c63a334ed3","Type":"ContainerStarted","Data":"ed178dd45274f5493674a096d41f0c1a45eebf37e3d019f177d609d2c9498746"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.098052 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" podStartSLOduration=100.098030944 podStartE2EDuration="1m40.098030944s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.053828073 +0000 UTC m=+122.385382831" watchObservedRunningTime="2026-02-17 00:10:51.098030944 +0000 UTC m=+122.429585702" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.099632 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podStartSLOduration=100.099621496 podStartE2EDuration="1m40.099621496s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.092253012 +0000 UTC m=+122.423807770" watchObservedRunningTime="2026-02-17 00:10:51.099621496 +0000 UTC m=+122.431176254" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.104123 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" 
event={"ID":"99aef4f8-4236-448e-94bb-cca311ff5d9b","Type":"ContainerStarted","Data":"2c324277f4603504f4afe037e210d302b514b6ec0a6ff1d5cdc9b2cd9f7f3d75"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.112910 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.113408 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.613387348 +0000 UTC m=+122.944942106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.131524 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" event={"ID":"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1","Type":"ContainerStarted","Data":"99c370bca5db8d0714b56d2959df82d67baa6b6cb0d0ce82735616a083615fb6"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.150230 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5z9b" 
event={"ID":"37d92c5c-658a-4b48-bbb5-db94b12d98a1","Type":"ContainerStarted","Data":"8ff07ced7837e45baf7ab2cf7b6f0c419709c17f0251a17cd9244d1553435171"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.150270 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5z9b" event={"ID":"37d92c5c-658a-4b48-bbb5-db94b12d98a1","Type":"ContainerStarted","Data":"8c848368d17256aec7e9548011296df82609e9d4615403b6a9a0d42d4bd6ce19"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.170002 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" podStartSLOduration=100.169982564 podStartE2EDuration="1m40.169982564s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.131710519 +0000 UTC m=+122.463265277" watchObservedRunningTime="2026-02-17 00:10:51.169982564 +0000 UTC m=+122.501537322" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.170400 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-vhpw4" podStartSLOduration=100.170393335 podStartE2EDuration="1m40.170393335s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.168255889 +0000 UTC m=+122.499810647" watchObservedRunningTime="2026-02-17 00:10:51.170393335 +0000 UTC m=+122.501948093" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.209216 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" 
event={"ID":"a6fd08da-3534-483e-a717-cad005275c5a","Type":"ContainerStarted","Data":"c9e838c3425dcb84eb4c37f1c2acc39e608393f0428613b5e498b1e6e5e6b9a3"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.218690 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.218949 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.71893673 +0000 UTC m=+123.050491488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.225493 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5mdds" event={"ID":"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8","Type":"ContainerStarted","Data":"15b6163728d3179922d9dc1d4afd4fbbee2b46caf01ffc3a12371f5d4a1b1e30"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.225524 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5mdds" 
event={"ID":"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8","Type":"ContainerStarted","Data":"edcbb423433b48e89101a8adfdbd01eb3fa5e323060b5e77e3d48d4db779f04e"} Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.227183 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.227240 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.244001 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g5z9b" podStartSLOduration=7.243970968 podStartE2EDuration="7.243970968s" podCreationTimestamp="2026-02-17 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.217977115 +0000 UTC m=+122.549531863" watchObservedRunningTime="2026-02-17 00:10:51.243970968 +0000 UTC m=+122.575525726" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.248809 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59406: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.250784 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-rgvbj" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.277557 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-6nhg5" podStartSLOduration=100.27754236 podStartE2EDuration="1m40.27754236s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:51.248293661 +0000 UTC m=+122.579848419" watchObservedRunningTime="2026-02-17 00:10:51.27754236 +0000 UTC m=+122.609097118" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.320094 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.320307 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.820282933 +0000 UTC m=+123.151837691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.320851 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.324200 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.824187344 +0000 UTC m=+123.155742102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.353846 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59414: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.367685 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mrr4k"] Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.426276 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.426735 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:51.926718137 +0000 UTC m=+123.258272895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.453864 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59422: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.497103 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59426: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.531180 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.531656 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.031638214 +0000 UTC m=+123.363192972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.544389 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.544778 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.550700 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.561133 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59428: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.638397 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.638827 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:52.138808589 +0000 UTC m=+123.470363347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.667936 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59442: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.740644 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.741044 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.241027744 +0000 UTC m=+123.572582502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.813416 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:51 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:51 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:51 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.813478 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.841518 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.841978 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:52.341961955 +0000 UTC m=+123.673516713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.893684 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59456: no serving certificate available for the kubelet" Feb 17 00:10:51 crc kubenswrapper[5109]: I0217 00:10:51.945485 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:51 crc kubenswrapper[5109]: E0217 00:10:51.946227 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.446211464 +0000 UTC m=+123.777766222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.047019 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.047324 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.54730891 +0000 UTC m=+123.878863668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.148171 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.148539 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.648527339 +0000 UTC m=+123.980082097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.235395 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" event={"ID":"cd7d724c-de71-4a3f-b43d-1d09799504cb","Type":"ContainerStarted","Data":"57fc13582be7d8b642ef8681c72fe3a00abde3ad2cd35d827afa468fee269d49"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.237641 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" event={"ID":"cf6fbed8-60b9-46db-a4e7-efc414eed3c3","Type":"ContainerStarted","Data":"78f5cf394971d1f3369c7e07474c242fcc24a6e008df6bce4b96c1018fb9aad0"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.238530 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.240319 5109 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-9mrrp container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.240361 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" podUID="cf6fbed8-60b9-46db-a4e7-efc414eed3c3" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.245636 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" event={"ID":"fcfae8bf-91d7-48d3-a978-1510fe282c92","Type":"ContainerStarted","Data":"a4afc82ba182f7db796283575d801ceded0d6e246e39ec851cf0a243f357d65f"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.245673 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" event={"ID":"fcfae8bf-91d7-48d3-a978-1510fe282c92","Type":"ContainerStarted","Data":"91fc6f8044a2b658bd188055cca2e52f5278e2880469fd1e91e07b13801628c6"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.249341 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.249738 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.749722327 +0000 UTC m=+124.081277085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.253735 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" event={"ID":"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca","Type":"ContainerStarted","Data":"a570eab330b32e9db2ce90ab889ad1463594b5ad43c5c28d8f3ed032dd435e65"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.253784 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" event={"ID":"7584ea80-3537-4eda-a17d-6d0ef3c0d7ca","Type":"ContainerStarted","Data":"cf05ba973a35b41e6805a5184694b2ba90c65e4fc00522c5608a61f18a507cdb"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.257421 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" event={"ID":"9e0b7021-c99b-4fab-9c19-51affb4ad611","Type":"ContainerStarted","Data":"77299341854b0d5279782fc1da9d71dd5e6520c6d2acc17702bd3fa68893f582"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.257453 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" event={"ID":"9e0b7021-c99b-4fab-9c19-51affb4ad611","Type":"ContainerStarted","Data":"f506c7a2b1724c09d165d10531e9933436669ae28c83b7d9af83d1b3dada0443"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.259386 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" 
event={"ID":"1da6d131-80fb-4fce-9613-5dc3c320889f","Type":"ContainerStarted","Data":"a4edee78725a7e3e0f815597f1fe05638e246c31bc121eb81c1e7e8347baa337"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.259420 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" event={"ID":"1da6d131-80fb-4fce-9613-5dc3c320889f","Type":"ContainerStarted","Data":"502151afb85b3e790549d2837acc95c3e2f6f845c6e24b51292fcbc488b21a7a"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.259431 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" event={"ID":"1da6d131-80fb-4fce-9613-5dc3c320889f","Type":"ContainerStarted","Data":"8796badbdcd21ac5d7b2297c23e5ff95649ab389416e6eca2dcf41e24e8b30df"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.261001 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-dqtqd" event={"ID":"6335485b-ac6f-4574-9590-d11aee2f8cf5","Type":"ContainerStarted","Data":"32cf316680d87797e91f6208ae4abd83a728fd917b7d02af443d676bd890960b"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.263273 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-jdfgh" event={"ID":"59b9b2cd-1da5-4c42-8ec2-fff0e1726882","Type":"ContainerStarted","Data":"e26645e2abbacb1f83e3e8df74f6fabac8e51a35d0168c21f6990cf26ba97ff7"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.265330 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" event={"ID":"03c9c6ff-db85-463c-8af8-5589f2af76f0","Type":"ContainerStarted","Data":"a30e11c69b96017b6d18352d0e077d7f9aa0c268050cc7654feb4ce8ac2dcb67"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.265369 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" event={"ID":"03c9c6ff-db85-463c-8af8-5589f2af76f0","Type":"ContainerStarted","Data":"7772b9e2337f66ba5b0d63563b904d2222bb505a19b6eb65e50e9a859093e958"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.265765 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.269956 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-nz44x" podStartSLOduration=101.269937258 podStartE2EDuration="1m41.269937258s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.266060146 +0000 UTC m=+123.597614904" watchObservedRunningTime="2026-02-17 00:10:52.269937258 +0000 UTC m=+123.601492016" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.281914 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" event={"ID":"e3bc1644-c40b-4971-9b76-c7c63a334ed3","Type":"ContainerStarted","Data":"18d211c6e9b5d5ec61b94dee8b085e271b89bcb9e4a4759d93c48f5c45f2e46f"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.281976 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" event={"ID":"e3bc1644-c40b-4971-9b76-c7c63a334ed3","Type":"ContainerStarted","Data":"2d68379cf9f6c66af5b288ce945116c0940b6217777986901facfc833ccb0f8c"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.285084 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59472: no serving certificate available for the kubelet" Feb 17 00:10:52 crc 
kubenswrapper[5109]: I0217 00:10:52.300135 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" event={"ID":"99aef4f8-4236-448e-94bb-cca311ff5d9b","Type":"ContainerStarted","Data":"54541c0282abc04d60cac5a3036df70d2ca1e66689646d0fcc6fbe76bb85e86f"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.328141 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" event={"ID":"c21fab5e-ccd1-41fb-8fb4-588d0fd9ebb1","Type":"ContainerStarted","Data":"08c46a6926bc2518a897356b775de7da29f07625947ec496a0a0eca2e1a04e66"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.351455 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.355680 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.85566667 +0000 UTC m=+124.187221428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.361344 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5mdds" event={"ID":"4ce76fa1-2f29-4c5a-b2d6-7869af7b25a8","Type":"ContainerStarted","Data":"0b8f9b47fce94d1b184c2eafed67d8b0e32702da0f7c4bef41eabfc187e6b00d"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.362098 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-5mdds" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.377351 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" event={"ID":"9448c632-b480-4584-be47-2b81207d4346","Type":"ContainerStarted","Data":"abed59c9bbce235dce935767ed070c3aabb24431c2bb5a8a43337db5b39f075a"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.377409 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" event={"ID":"9448c632-b480-4584-be47-2b81207d4346","Type":"ContainerStarted","Data":"be4565f8763778f763f85309c18397d4672209e3293b0ae4899ce28992d393f6"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.393237 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" event={"ID":"6b63354c-ebcc-4f47-895a-dc4361085ce6","Type":"ContainerStarted","Data":"bfc80bd02be9aa87b041caf630fca286f4736a6efbfc8fb55e1f91192f0a482b"} 
Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.393273 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" event={"ID":"6b63354c-ebcc-4f47-895a-dc4361085ce6","Type":"ContainerStarted","Data":"c2ea486bfda954acb267a5b99d33df379d5b2618e35fa6e7dc870495248f0e0f"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.403528 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" podStartSLOduration=101.403490406 podStartE2EDuration="1m41.403490406s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.372007399 +0000 UTC m=+123.703562167" watchObservedRunningTime="2026-02-17 00:10:52.403490406 +0000 UTC m=+123.735045164" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.405863 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" event={"ID":"0a192868-bc11-40e5-92ff-01df13a30588","Type":"ContainerStarted","Data":"6f7c4f012fe099e0b1b7425eb0fe983559e5a835ac6a65beae6a413205eff4af"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.406654 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.411604 5109 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-47trg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.411670 5109 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" podUID="0a192868-bc11-40e5-92ff-01df13a30588" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.415122 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" event={"ID":"a095f37e-9225-494c-908e-66bfa750642b","Type":"ContainerStarted","Data":"5ca9434127513ca3cdb4762fd010341916eaf95b01960d8ca8a5a7ecd15ae8d1"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.416343 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.425509 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-8f27m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.425578 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.426159 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" event={"ID":"2ffee5ff-84cf-4dfa-816b-ca1f8b763069","Type":"ContainerStarted","Data":"af36af24e18a33042499821addd1b3c7ad154110236003192e51f0e4a05189ad"} Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 
00:10:52.427359 5109 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-6kxlb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.427413 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" podUID="a095f37e-9225-494c-908e-66bfa750642b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.436700 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-dhxx7" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.441945 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-l5t5g" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.447890 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-jdfgh" podStartSLOduration=101.447876922 podStartE2EDuration="1m41.447876922s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.405461748 +0000 UTC m=+123.737016506" watchObservedRunningTime="2026-02-17 00:10:52.447876922 +0000 UTC m=+123.779431680" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.454291 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.455926 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:52.955911223 +0000 UTC m=+124.287465981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.522063 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-tw52v" podStartSLOduration=101.522044611 podStartE2EDuration="1m41.522044611s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.450583063 +0000 UTC m=+123.782137821" watchObservedRunningTime="2026-02-17 00:10:52.522044611 +0000 UTC m=+123.853599369" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.523614 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" podStartSLOduration=101.523607292 podStartE2EDuration="1m41.523607292s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.521585649 +0000 UTC m=+123.853140407" watchObservedRunningTime="2026-02-17 00:10:52.523607292 +0000 UTC m=+123.855162050" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.555955 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" podStartSLOduration=101.555929721 podStartE2EDuration="1m41.555929721s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.553341453 +0000 UTC m=+123.884896211" watchObservedRunningTime="2026-02-17 00:10:52.555929721 +0000 UTC m=+123.887484479" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.557397 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.559500 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.059486314 +0000 UTC m=+124.391041072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.583334 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-vmcp7" podStartSLOduration=102.58331143 podStartE2EDuration="1m42.58331143s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.573243055 +0000 UTC m=+123.904797813" watchObservedRunningTime="2026-02-17 00:10:52.58331143 +0000 UTC m=+123.914866188" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.620803 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-2bqqx" podStartSLOduration=101.620701262 podStartE2EDuration="1m41.620701262s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.619351877 +0000 UTC m=+123.950906635" watchObservedRunningTime="2026-02-17 00:10:52.620701262 +0000 UTC m=+123.952256020" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.645887 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-4rxx6" podStartSLOduration=101.645872993 podStartE2EDuration="1m41.645872993s" podCreationTimestamp="2026-02-17 00:09:11 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.641303293 +0000 UTC m=+123.972858051" watchObservedRunningTime="2026-02-17 00:10:52.645872993 +0000 UTC m=+123.977427751" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.666308 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.666772 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.166751352 +0000 UTC m=+124.498306120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.708978 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" podStartSLOduration=101.708964901 podStartE2EDuration="1m41.708964901s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.705732186 +0000 UTC m=+124.037286944" watchObservedRunningTime="2026-02-17 00:10:52.708964901 +0000 UTC m=+124.040519659" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.767916 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.768384 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.268373061 +0000 UTC m=+124.599927819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.769419 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-cwkk6" podStartSLOduration=101.769395578 podStartE2EDuration="1m41.769395578s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.761903461 +0000 UTC m=+124.093458219" watchObservedRunningTime="2026-02-17 00:10:52.769395578 +0000 UTC m=+124.100950336" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.793640 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-xmqnl" podStartSLOduration=101.793617874 podStartE2EDuration="1m41.793617874s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.791639322 +0000 UTC m=+124.123194080" watchObservedRunningTime="2026-02-17 00:10:52.793617874 +0000 UTC m=+124.125172632" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.810809 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 
00:10:52 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:52 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:52 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.811152 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.812722 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-4xcq8" podStartSLOduration=101.812710016 podStartE2EDuration="1m41.812710016s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.81170679 +0000 UTC m=+124.143261548" watchObservedRunningTime="2026-02-17 00:10:52.812710016 +0000 UTC m=+124.144264774" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.835080 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.835133 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.836017 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-mlw2f" podStartSLOduration=101.836002898 podStartE2EDuration="1m41.836002898s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.835210477 +0000 UTC 
m=+124.166765235" watchObservedRunningTime="2026-02-17 00:10:52.836002898 +0000 UTC m=+124.167557646" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.843324 5109 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-qsvff container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.843376 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff" podUID="fcfae8bf-91d7-48d3-a978-1510fe282c92" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.864036 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" podStartSLOduration=101.864016764 podStartE2EDuration="1m41.864016764s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.861611971 +0000 UTC m=+124.193166889" watchObservedRunningTime="2026-02-17 00:10:52.864016764 +0000 UTC m=+124.195571512" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.871370 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.871882 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.37186404 +0000 UTC m=+124.703418798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.883405 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5mdds" podStartSLOduration=8.883386463 podStartE2EDuration="8.883386463s" podCreationTimestamp="2026-02-17 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.882943241 +0000 UTC m=+124.214498009" watchObservedRunningTime="2026-02-17 00:10:52.883386463 +0000 UTC m=+124.214941221" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.897830 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" podStartSLOduration=101.897812592 podStartE2EDuration="1m41.897812592s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:52.897225846 +0000 UTC m=+124.228780624" watchObservedRunningTime="2026-02-17 00:10:52.897812592 +0000 UTC m=+124.229367350" Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.973212 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:52 crc kubenswrapper[5109]: E0217 00:10:52.973678 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.473656204 +0000 UTC m=+124.805211022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:52 crc kubenswrapper[5109]: I0217 00:10:52.982725 5109 ???:1] "http: TLS handshake error from 192.168.126.11:59480: no serving certificate available for the kubelet" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.073781 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.073971 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.573941828 +0000 UTC m=+124.905496586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.074170 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.074500 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.574486213 +0000 UTC m=+124.906040961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.175830 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.176036 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.676008539 +0000 UTC m=+125.007563297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.176372 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.176642 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.676629356 +0000 UTC m=+125.008184114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.277090 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.277211 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.777185507 +0000 UTC m=+125.108740265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.277537 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.277865 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.777851955 +0000 UTC m=+125.109406713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.379498 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.379865 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.879827574 +0000 UTC m=+125.211382342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.380215 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.380554 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.880539102 +0000 UTC m=+125.212093860 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.432040 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-546f6" event={"ID":"693f1ea3-457d-4233-844f-1125adaa9fa9","Type":"ContainerStarted","Data":"2c1a654e43e854bf38268871ec320b633316b817daa1d21a3e4cc052b5736023"} Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.432672 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" gracePeriod=30 Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.438905 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-8f27m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.438970 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.450277 5109 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-6kxlb" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.453632 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9mrrp" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.481826 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.482191 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:53.982174212 +0000 UTC m=+125.313728970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.584469 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.586881 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.086862942 +0000 UTC m=+125.418417700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.685493 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.685868 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.185849012 +0000 UTC m=+125.517403770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.709747 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-47trg" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.786899 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.787286 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.287271397 +0000 UTC m=+125.618826155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.811767 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:53 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:53 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:53 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.811823 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.887854 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.888239 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:54.388218028 +0000 UTC m=+125.719772786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:53 crc kubenswrapper[5109]: I0217 00:10:53.989139 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:53 crc kubenswrapper[5109]: E0217 00:10:53.989584 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.489566501 +0000 UTC m=+125.821121259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.063259 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.090071 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.090284 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.590258756 +0000 UTC m=+125.921813514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.142557 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.157980 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.162056 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.162345 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.168982 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.191843 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.192478 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.692462881 +0000 UTC m=+126.024017639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.233705 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.242849 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.243012 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.246021 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.246074 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.293275 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.293439 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.293465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.293680 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-17 00:10:54.793665309 +0000 UTC m=+126.125220067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.311382 5109 ???:1] "http: TLS handshake error from 192.168.126.11:55262: no serving certificate available for the kubelet" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.394437 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.394495 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.394555 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " 
pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.394616 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.394640 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.395223 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.895207557 +0000 UTC m=+126.226762315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.395392 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.426797 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.474066 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.478269 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.480090 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.480670 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.493755 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.495700 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.495954 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.496021 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.496151 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " 
pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.496241 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:54.99622129 +0000 UTC m=+126.327776048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.512383 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.562000 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.597891 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.597929 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.598038 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.598057 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r9x4\" (UniqueName: \"kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.604777 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.104763091 +0000 UTC m=+126.436317839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.687806 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78v77"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.700062 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.700261 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.700282 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 
00:10:54.700352 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2r9x4\" (UniqueName: \"kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.700796 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.200779064 +0000 UTC m=+126.532333822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.701141 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.701393 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc 
kubenswrapper[5109]: I0217 00:10:54.708707 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78v77"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.708985 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.713365 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.745448 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r9x4\" (UniqueName: \"kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4\") pod \"certified-operators-vjl8l\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") " pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.792048 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.801133 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.801236 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.801285 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnzv\" (UniqueName: \"kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.801349 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77" Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.801634 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.301622033 +0000 UTC m=+126.633176791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.808944 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.812978 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:54 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:54 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:54 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.813030 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.826151 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 17 00:10:54 crc kubenswrapper[5109]: W0217 00:10:54.848781 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod23dfaa7e_0474_427a_812c_1131a2015031.slice/crio-99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2 WatchSource:0}: Error finding container 99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2: Status 404 returned error can't find the container with id 99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2 Feb 17 00:10:54 crc kubenswrapper[5109]: W0217 00:10:54.850325 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8e89f769_6dbf_45de_a435_c5c7439b06d0.slice/crio-84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e WatchSource:0}: Error finding container 84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e: Status 404 returned error can't find the container with id 84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.871767 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"] Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.881376 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.884073 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"]
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.924647 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.924771 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.424747916 +0000 UTC m=+126.756302674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925178 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnzv\" (UniqueName: \"kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925227 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xkq\" (UniqueName: \"kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925249 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925270 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925294 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925373 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.925402 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:54 crc kubenswrapper[5109]: E0217 00:10:54.925734 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.425723212 +0000 UTC m=+126.757277970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.926082 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.926561 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:54 crc kubenswrapper[5109]: I0217 00:10:54.947537 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnzv\" (UniqueName: \"kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv\") pod \"community-operators-78v77\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026039 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.026418 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.526402306 +0000 UTC m=+126.857957064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026476 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026507 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026572 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xkq\" (UniqueName: \"kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026624 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.026994 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.027198 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.027398 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.527391552 +0000 UTC m=+126.858946310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.045221 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xkq\" (UniqueName: \"kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq\") pod \"certified-operators-zf6ds\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.056787 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.079819 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lh8gv"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.129437 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.129480 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-xhskt"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.129491 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh8gv"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.130289 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.132358 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.132660 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.632646387 +0000 UTC m=+126.964201145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.134990 5109 patch_prober.go:28] interesting pod/console-64d44f6ddf-xhskt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.135056 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-xhskt" podUID="4aa0e237-cb03-44d4-bf30-949ab25f2e12" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.234682 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.234769 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.234851 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjhg\" (UniqueName: \"kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.234903 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.237279 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.737258405 +0000 UTC m=+127.068813223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.245887 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf6ds"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.336395 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.336621 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.336767 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.336811 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tjhg\" (UniqueName: \"kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.337485 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.837457177 +0000 UTC m=+127.169011935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.337745 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.337861 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.364586 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tjhg\" (UniqueName: \"kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg\") pod \"community-operators-lh8gv\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.389679 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78v77"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.437657 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.438144 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:55.938089611 +0000 UTC m=+127.269644369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.445744 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.453723 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8e89f769-6dbf-45de-a435-c5c7439b06d0","Type":"ContainerStarted","Data":"fd63091449e018182b3ab05407db97b90384df2338bb039b0e652f88c610dc28"}
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.453785 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8e89f769-6dbf-45de-a435-c5c7439b06d0","Type":"ContainerStarted","Data":"84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e"}
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.455840 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lh8gv"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.488555 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=1.488539436 podStartE2EDuration="1.488539436s" podCreationTimestamp="2026-02-17 00:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:55.486994316 +0000 UTC m=+126.818549074" watchObservedRunningTime="2026-02-17 00:10:55.488539436 +0000 UTC m=+126.820094194"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.495434 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"23dfaa7e-0474-427a-812c-1131a2015031","Type":"ContainerStarted","Data":"4ac557fe2abc5ec6a493bf2626f78161581c7f5445dfe18e5183fc0df349ba5f"}
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.495492 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"23dfaa7e-0474-427a-812c-1131a2015031","Type":"ContainerStarted","Data":"99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2"}
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.495506 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerStarted","Data":"4acca9059eb8174ce0c9145617dccc092ebcc6dc5c15c86185c92bc734f36d2c"}
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.506715 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=1.5066962130000001 podStartE2EDuration="1.506696213s" podCreationTimestamp="2026-02-17 00:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:55.503303984 +0000 UTC m=+126.834858742" watchObservedRunningTime="2026-02-17 00:10:55.506696213 +0000 UTC m=+126.838250971"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.539864 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.541653 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.041575179 +0000 UTC m=+127.373129937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.567274 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.642065 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.642441 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.142424499 +0000 UTC m=+127.473979257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.736872 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lh8gv"]
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.743612 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.743820 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.243784881 +0000 UTC m=+127.575339649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.744056 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.744675 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.244657304 +0000 UTC m=+127.576212142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.809349 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 00:10:55 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 17 00:10:55 crc kubenswrapper[5109]: [+]process-running ok
Feb 17 00:10:55 crc kubenswrapper[5109]: healthz check failed
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.809402 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.845539 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.845859 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.345842732 +0000 UTC m=+127.677397490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:55 crc kubenswrapper[5109]: I0217 00:10:55.947024 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:55 crc kubenswrapper[5109]: E0217 00:10:55.947398 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.44738617 +0000 UTC m=+127.778940918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.047876 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.048048 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.548020713 +0000 UTC m=+127.879575471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.049212 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.049583 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.549571474 +0000 UTC m=+127.881126232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.115764 5109 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.150836 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.151003 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.650977998 +0000 UTC m=+127.982532756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.151228 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.151534 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.651523982 +0000 UTC m=+127.983078750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.253124 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.253513 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.753494671 +0000 UTC m=+128.085049439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.354985 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.355571 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.855548142 +0000 UTC m=+128.187102900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.456735 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.457195 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-17 00:10:56.957175111 +0000 UTC m=+128.288729869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.469802 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"] Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.477804 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.480390 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.482520 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"] Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.496334 5109 generic.go:358] "Generic (PLEG): container finished" podID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerID="c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.496419 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerDied","Data":"c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.498648 5109 generic.go:358] "Generic (PLEG): container finished" podID="74a26206-1199-4cf4-912a-fa5e03a96713" containerID="beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.498716 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerDied","Data":"beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.498735 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerStarted","Data":"bb45d32b2e8e19508d1dd79e4e46edbb6cba567a24a26228bc4ec2223fb31ae8"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.500375 
5109 generic.go:358] "Generic (PLEG): container finished" podID="8e89f769-6dbf-45de-a435-c5c7439b06d0" containerID="fd63091449e018182b3ab05407db97b90384df2338bb039b0e652f88c610dc28" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.500444 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8e89f769-6dbf-45de-a435-c5c7439b06d0","Type":"ContainerDied","Data":"fd63091449e018182b3ab05407db97b90384df2338bb039b0e652f88c610dc28"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.502507 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-546f6" event={"ID":"693f1ea3-457d-4233-844f-1125adaa9fa9","Type":"ContainerStarted","Data":"90f9d0cfbf2b08bc912ae2ddefae792dcaaf1120d5b808e683637b4c1863f5b9"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.502530 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-546f6" event={"ID":"693f1ea3-457d-4233-844f-1125adaa9fa9","Type":"ContainerStarted","Data":"34484b8c6af778849c34e28db08c7192eeb8fa08398fedd55821fe2b58640f3d"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.502540 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-546f6" event={"ID":"693f1ea3-457d-4233-844f-1125adaa9fa9","Type":"ContainerStarted","Data":"fbf6a1a4f7485779dfe09dd2cef80d21a70618596a3ff9b5a93ea79fbc3f57e5"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.503733 5109 generic.go:358] "Generic (PLEG): container finished" podID="7c7999b4-952b-46c2-8381-459a7524cd88" containerID="c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.503814 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" 
event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerDied","Data":"c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.503839 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerStarted","Data":"f09b4fa447f1b47a845f73b292607908ad48e53f1db30e68c49228c8d61db398"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.505458 5109 generic.go:358] "Generic (PLEG): container finished" podID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerID="4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.505526 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerDied","Data":"4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.505541 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerStarted","Data":"e9d496cae2c2a977ee39225a5aeac7071fb0e40edd4c9d193f93fd9ac74c6dc7"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.529302 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.529457 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.537359 5109 generic.go:358] "Generic (PLEG): container finished" podID="23dfaa7e-0474-427a-812c-1131a2015031" containerID="4ac557fe2abc5ec6a493bf2626f78161581c7f5445dfe18e5183fc0df349ba5f" exitCode=0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.537418 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"23dfaa7e-0474-427a-812c-1131a2015031","Type":"ContainerDied","Data":"4ac557fe2abc5ec6a493bf2626f78161581c7f5445dfe18e5183fc0df349ba5f"} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.569103 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.569255 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.569316 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gxz\" (UniqueName: \"kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.569450 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:56 crc kubenswrapper[5109]: E0217 00:10:56.571251 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-17 00:10:57.071237317 +0000 UTC m=+128.402792065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-6jz6g" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.622607 5109 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T00:10:56.115790683Z","UUID":"161fef87-d3bc-4bb0-8dab-ad330d29745a","Handler":null,"Name":"","Endpoint":""} Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.637524 5109 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.637559 5109 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.670159 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.670376 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.670460 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.670905 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.670929 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 
crc kubenswrapper[5109]: I0217 00:10:56.670970 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gxz\" (UniqueName: \"kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.686502 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-546f6" podStartSLOduration=12.686485565 podStartE2EDuration="12.686485565s" podCreationTimestamp="2026-02-17 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:56.667500816 +0000 UTC m=+127.999055574" watchObservedRunningTime="2026-02-17 00:10:56.686485565 +0000 UTC m=+128.018040323" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.693736 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.705168 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gxz\" (UniqueName: \"kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz\") pod \"redhat-marketplace-5p9cp\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.772151 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.792173 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.810809 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 00:10:56 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 17 00:10:56 crc kubenswrapper[5109]: [+]process-running ok Feb 17 00:10:56 crc kubenswrapper[5109]: healthz check failed Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.811045 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.841930 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.842207 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.859979 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-6jz6g\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.871393 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"] Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.884311 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"] Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.884490 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.937137 5109 ???:1] "http: TLS handshake error from 192.168.126.11:55270: no serving certificate available for the kubelet" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.975798 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7phv\" (UniqueName: \"kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.975878 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:56 crc kubenswrapper[5109]: I0217 00:10:56.975919 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.078835 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.078895 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.078973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7phv\" (UniqueName: \"kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.079917 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.080092 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.098987 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"] Feb 17 00:10:57 crc kubenswrapper[5109]: W0217 00:10:57.118049 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb9e5613_b1ca_483f_8efe_7c150933934b.slice/crio-46353b6f310473e4d7470a9ece43f2cf920d3fc1d91bb88d3020c9aed533ce0c WatchSource:0}: Error finding container 
46353b6f310473e4d7470a9ece43f2cf920d3fc1d91bb88d3020c9aed533ce0c: Status 404 returned error can't find the container with id 46353b6f310473e4d7470a9ece43f2cf920d3fc1d91bb88d3020c9aed533ce0c Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.122187 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7phv\" (UniqueName: \"kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv\") pod \"redhat-marketplace-p4s9k\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.122535 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.125506 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.198146 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4s9k"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.368895 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"]
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.476017 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.545966 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" event={"ID":"9c5b02a2-437a-46c3-b4ce-d856b61053f6","Type":"ContainerStarted","Data":"a589723eb4ed0916bdf38360cd9240be2b1cc1e993fd29f9009e9233a5a253ae"}
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.548434 5109 generic.go:358] "Generic (PLEG): container finished" podID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerID="84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027" exitCode=0
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.548621 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerDied","Data":"84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027"}
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.548663 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerStarted","Data":"46353b6f310473e4d7470a9ece43f2cf920d3fc1d91bb88d3020c9aed533ce0c"}
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.739744 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"]
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.807053 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.810036 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 00:10:57 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 17 00:10:57 crc kubenswrapper[5109]: [+]process-running ok
Feb 17 00:10:57 crc kubenswrapper[5109]: healthz check failed
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.810095 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.855288 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.861839 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-qsvff"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.871211 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-trn6m"]
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.901678 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.910082 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.913077 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-trn6m"]
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.953972 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 17 00:10:57 crc kubenswrapper[5109]: I0217 00:10:57.960790 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007394 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir\") pod \"8e89f769-6dbf-45de-a435-c5c7439b06d0\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") "
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007520 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access\") pod \"8e89f769-6dbf-45de-a435-c5c7439b06d0\" (UID: \"8e89f769-6dbf-45de-a435-c5c7439b06d0\") "
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007753 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007805 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007820 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t4hv\" (UniqueName: \"kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.007955 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8e89f769-6dbf-45de-a435-c5c7439b06d0" (UID: "8e89f769-6dbf-45de-a435-c5c7439b06d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.021981 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8e89f769-6dbf-45de-a435-c5c7439b06d0" (UID: "8e89f769-6dbf-45de-a435-c5c7439b06d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110070 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access\") pod \"23dfaa7e-0474-427a-812c-1131a2015031\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") "
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110264 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir\") pod \"23dfaa7e-0474-427a-812c-1131a2015031\" (UID: \"23dfaa7e-0474-427a-812c-1131a2015031\") "
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110503 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110654 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110713 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6t4hv\" (UniqueName: \"kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110944 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e89f769-6dbf-45de-a435-c5c7439b06d0-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.110968 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e89f769-6dbf-45de-a435-c5c7439b06d0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.111780 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23dfaa7e-0474-427a-812c-1131a2015031" (UID: "23dfaa7e-0474-427a-812c-1131a2015031"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.112298 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.112388 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.115897 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23dfaa7e-0474-427a-812c-1131a2015031" (UID: "23dfaa7e-0474-427a-812c-1131a2015031"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.131072 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t4hv\" (UniqueName: \"kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv\") pod \"redhat-operators-trn6m\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.212082 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23dfaa7e-0474-427a-812c-1131a2015031-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.212109 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23dfaa7e-0474-427a-812c-1131a2015031-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.272860 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-247nb"]
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.274897 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23dfaa7e-0474-427a-812c-1131a2015031" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.274924 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="23dfaa7e-0474-427a-812c-1131a2015031" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.274938 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8e89f769-6dbf-45de-a435-c5c7439b06d0" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.274946 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e89f769-6dbf-45de-a435-c5c7439b06d0" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.275062 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="8e89f769-6dbf-45de-a435-c5c7439b06d0" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.275077 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="23dfaa7e-0474-427a-812c-1131a2015031" containerName="pruner"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.278470 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.596834 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-247nb"]
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.597089 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-trn6m"]
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.597020 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.603540 5109 generic.go:358] "Generic (PLEG): container finished" podID="e9b99043-6adb-499d-bcec-e0003af60fed" containerID="a6474468d5f834ed0b5af1d3b321d7378481b3403bc88aacf65293efdc5fa9f7" exitCode=0
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.603667 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerDied","Data":"a6474468d5f834ed0b5af1d3b321d7378481b3403bc88aacf65293efdc5fa9f7"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.603695 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerStarted","Data":"8e47ffd8c06d5c488c757eed0fbeaab611084fc6876e96e67e2da877d10d78eb"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.606797 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.606852 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"8e89f769-6dbf-45de-a435-c5c7439b06d0","Type":"ContainerDied","Data":"84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.606881 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c6842f20024b8f95493fcdb1c29fde96ef17b8c6e2c3f6dea5de7cf243ff7e"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.610610 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" event={"ID":"9c5b02a2-437a-46c3-b4ce-d856b61053f6","Type":"ContainerStarted","Data":"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.625839 5109 generic.go:358] "Generic (PLEG): container finished" podID="2ffee5ff-84cf-4dfa-816b-ca1f8b763069" containerID="af36af24e18a33042499821addd1b3c7ad154110236003192e51f0e4a05189ad" exitCode=0
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.625984 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" event={"ID":"2ffee5ff-84cf-4dfa-816b-ca1f8b763069","Type":"ContainerDied","Data":"af36af24e18a33042499821addd1b3c7ad154110236003192e51f0e4a05189ad"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.638299 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.640345 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"23dfaa7e-0474-427a-812c-1131a2015031","Type":"ContainerDied","Data":"99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2"}
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.640385 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f461d4dd0c18be6db15f4b00d50abcb8f6bf2716e0453b9c378f1c2130a0f2"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.727069 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.737118 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.737198 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwrc\" (UniqueName: \"kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.737425 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.809482 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 00:10:58 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 17 00:10:58 crc kubenswrapper[5109]: [+]process-running ok
Feb 17 00:10:58 crc kubenswrapper[5109]: healthz check failed
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.809558 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.839210 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.839928 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.839856 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwrc\" (UniqueName: \"kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.840773 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.846624 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.861969 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwrc\" (UniqueName: \"kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc\") pod \"redhat-operators-247nb\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:58 crc kubenswrapper[5109]: I0217 00:10:58.937452 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.130091 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" podStartSLOduration=108.130071924 podStartE2EDuration="1m48.130071924s" podCreationTimestamp="2026-02-17 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:10:58.752954988 +0000 UTC m=+130.084509776" watchObservedRunningTime="2026-02-17 00:10:59.130071924 +0000 UTC m=+130.461626682"
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.132458 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-247nb"]
Feb 17 00:10:59 crc kubenswrapper[5109]: W0217 00:10:59.146432 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0800be8_a032_4a80_b5f3_6f7b11ef439e.slice/crio-7033f67a40935eea5cfaa5f42412b63f68946494436fb043021a0c2cf459c1d5 WatchSource:0}: Error finding container 7033f67a40935eea5cfaa5f42412b63f68946494436fb043021a0c2cf459c1d5: Status 404 returned error can't find the container with id 7033f67a40935eea5cfaa5f42412b63f68946494436fb043021a0c2cf459c1d5
Feb 17 00:10:59 crc kubenswrapper[5109]: E0217 00:10:59.452510 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:10:59 crc kubenswrapper[5109]: E0217 00:10:59.454205 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:10:59 crc kubenswrapper[5109]: E0217 00:10:59.456441 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:10:59 crc kubenswrapper[5109]: E0217 00:10:59.457481 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.645266 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerStarted","Data":"7033f67a40935eea5cfaa5f42412b63f68946494436fb043021a0c2cf459c1d5"}
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.647785 5109 generic.go:358] "Generic (PLEG): container finished" podID="387ec0da-dcb1-4001-8439-5793c9384015" containerID="3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc" exitCode=0
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.647948 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerDied","Data":"3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc"}
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.647974 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerStarted","Data":"22e3b626d9e9a74ab4e62f5d417bea887ce782ab4d43508c1071360494fd1212"}
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.809984 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-rw5p4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 00:10:59 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 17 00:10:59 crc kubenswrapper[5109]: [+]process-running ok
Feb 17 00:10:59 crc kubenswrapper[5109]: healthz check failed
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.810360 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4" podUID="a4d85031-8c4b-4260-9279-77b7e3a7d75d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 00:10:59 crc kubenswrapper[5109]: I0217 00:10:59.936785 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.068349 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v65w5\" (UniqueName: \"kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5\") pod \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") "
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.068409 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume\") pod \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") "
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.068659 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume\") pod \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\" (UID: \"2ffee5ff-84cf-4dfa-816b-ca1f8b763069\") "
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.069183 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume" (OuterVolumeSpecName: "config-volume") pod "2ffee5ff-84cf-4dfa-816b-ca1f8b763069" (UID: "2ffee5ff-84cf-4dfa-816b-ca1f8b763069"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.084121 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2ffee5ff-84cf-4dfa-816b-ca1f8b763069" (UID: "2ffee5ff-84cf-4dfa-816b-ca1f8b763069"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.084342 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5" (OuterVolumeSpecName: "kube-api-access-v65w5") pod "2ffee5ff-84cf-4dfa-816b-ca1f8b763069" (UID: "2ffee5ff-84cf-4dfa-816b-ca1f8b763069"). InnerVolumeSpecName "kube-api-access-v65w5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.169730 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.169764 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v65w5\" (UniqueName: \"kubernetes.io/projected/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-kube-api-access-v65w5\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.169773 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffee5ff-84cf-4dfa-816b-ca1f8b763069-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:00 crc kubenswrapper[5109]: E0217 00:11:00.189286 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.449937 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5mdds"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.684704 5109 generic.go:358] "Generic (PLEG): container finished" podID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerID="088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13" exitCode=0
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.684876 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerDied","Data":"088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13"}
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.691962 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.692906 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521440-rcf5s" event={"ID":"2ffee5ff-84cf-4dfa-816b-ca1f8b763069","Type":"ContainerDied","Data":"a46c747b4badb168efbe19a0be2f80a582cc7689b9ee0ccec9eed4167a5f6710"}
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.692968 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46c747b4badb168efbe19a0be2f80a582cc7689b9ee0ccec9eed4167a5f6710"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.810011 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:11:00 crc kubenswrapper[5109]: I0217 00:11:00.813931 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-rw5p4"
Feb 17 00:11:01 crc kubenswrapper[5109]: I0217 00:11:01.227273 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Feb 17 00:11:01 crc kubenswrapper[5109]: I0217 00:11:01.227333 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.080220 5109 ???:1] "http: TLS handshake error from 192.168.126.11:55274: no serving certificate available for the kubelet"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.312992 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.313047 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.313091 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.313139 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.315439 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.315908 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.319339 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.324820 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.325940 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.340774 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.341367 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.341428 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.379778 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.387930 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.413953 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.414204 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.416067 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.428701 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d9259cd-7490-4a4f-b09c-db6d25fadf0e-metrics-certs\") pod \"network-metrics-daemon-t9gkm\" (UID: \"1d9259cd-7490-4a4f-b09c-db6d25fadf0e\") " pod="openshift-multus/network-metrics-daemon-t9gkm"
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.701878 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 17 00:11:02 crc kubenswrapper[5109]: I0217 00:11:02.710353 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t9gkm" Feb 17 00:11:03 crc kubenswrapper[5109]: I0217 00:11:03.437148 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:11:05 crc kubenswrapper[5109]: I0217 00:11:05.253863 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:11:05 crc kubenswrapper[5109]: I0217 00:11:05.258753 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-xhskt" Feb 17 00:11:06 crc kubenswrapper[5109]: I0217 00:11:06.527940 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-5pqmv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Feb 17 00:11:06 crc kubenswrapper[5109]: I0217 00:11:06.528568 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-5pqmv" podUID="abd1baa1-4b4c-459b-b487-5dd283fe0ad9" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.394340 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"] Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.394689 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" containerID="cri-o://41942d56c832f013ddb8b9c5a1c0321a5e20d5bc80b20bb9c41b02bf585fbff7" gracePeriod=30 Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.420467 5109 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"] Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.421021 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" containerName="route-controller-manager" containerID="cri-o://3611fea1f040b89ed0cf69af4ec7d3876bb5b3a914c9fb1458a5b9e6901f615d" gracePeriod=30 Feb 17 00:11:09 crc kubenswrapper[5109]: E0217 00:11:09.447331 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 00:11:09 crc kubenswrapper[5109]: E0217 00:11:09.448822 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 00:11:09 crc kubenswrapper[5109]: E0217 00:11:09.450018 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 17 00:11:09 crc kubenswrapper[5109]: E0217 00:11:09.450106 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.536496 5109 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-r8nwv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 17 00:11:09 crc kubenswrapper[5109]: I0217 00:11:09.536883 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 17 00:11:10 crc kubenswrapper[5109]: I0217 00:11:10.044210 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:11:10 crc kubenswrapper[5109]: E0217 00:11:10.319887 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:11:10 crc kubenswrapper[5109]: I0217 00:11:10.769961 5109 generic.go:358] "Generic (PLEG): container finished" podID="cf4411dd-78f7-458e-b92b-e1670922138d" containerID="3611fea1f040b89ed0cf69af4ec7d3876bb5b3a914c9fb1458a5b9e6901f615d" exitCode=0 Feb 17 00:11:10 crc kubenswrapper[5109]: I0217 00:11:10.770039 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" 
event={"ID":"cf4411dd-78f7-458e-b92b-e1670922138d","Type":"ContainerDied","Data":"3611fea1f040b89ed0cf69af4ec7d3876bb5b3a914c9fb1458a5b9e6901f615d"} Feb 17 00:11:10 crc kubenswrapper[5109]: I0217 00:11:10.779966 5109 generic.go:358] "Generic (PLEG): container finished" podID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerID="41942d56c832f013ddb8b9c5a1c0321a5e20d5bc80b20bb9c41b02bf585fbff7" exitCode=0 Feb 17 00:11:10 crc kubenswrapper[5109]: I0217 00:11:10.780057 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" event={"ID":"f19c89c8-8db7-461b-bf1f-61133b64a2da","Type":"ContainerDied","Data":"41942d56c832f013ddb8b9c5a1c0321a5e20d5bc80b20bb9c41b02bf585fbff7"} Feb 17 00:11:11 crc kubenswrapper[5109]: I0217 00:11:11.233892 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-5pqmv" Feb 17 00:11:12 crc kubenswrapper[5109]: I0217 00:11:12.338629 5109 ???:1] "http: TLS handshake error from 192.168.126.11:54064: no serving certificate available for the kubelet" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.749215 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.753230 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.779906 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780504 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" containerName="route-controller-manager" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780522 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" containerName="route-controller-manager" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780541 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ffee5ff-84cf-4dfa-816b-ca1f8b763069" containerName="collect-profiles" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780548 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffee5ff-84cf-4dfa-816b-ca1f8b763069" containerName="collect-profiles" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780570 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780575 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780709 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" containerName="controller-manager" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.780726 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" containerName="route-controller-manager" Feb 17 00:11:13 crc 
kubenswrapper[5109]: I0217 00:11:13.780734 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ffee5ff-84cf-4dfa-816b-ca1f8b763069" containerName="collect-profiles" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.791003 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797272 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797688 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797722 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797753 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thjpm\" (UniqueName: \"kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm\") pod \"cf4411dd-78f7-458e-b92b-e1670922138d\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797924 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc 
kubenswrapper[5109]: I0217 00:11:13.797956 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.797994 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.798015 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp\") pod \"cf4411dd-78f7-458e-b92b-e1670922138d\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.798045 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v7d4\" (UniqueName: \"kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4\") pod \"f19c89c8-8db7-461b-bf1f-61133b64a2da\" (UID: \"f19c89c8-8db7-461b-bf1f-61133b64a2da\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.798070 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert\") pod \"cf4411dd-78f7-458e-b92b-e1670922138d\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.798085 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config\") pod 
\"cf4411dd-78f7-458e-b92b-e1670922138d\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.798148 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca\") pod \"cf4411dd-78f7-458e-b92b-e1670922138d\" (UID: \"cf4411dd-78f7-458e-b92b-e1670922138d\") " Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.801046 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf4411dd-78f7-458e-b92b-e1670922138d" (UID: "cf4411dd-78f7-458e-b92b-e1670922138d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.801667 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp" (OuterVolumeSpecName: "tmp") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.802193 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.802632 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.803390 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config" (OuterVolumeSpecName: "config") pod "cf4411dd-78f7-458e-b92b-e1670922138d" (UID: "cf4411dd-78f7-458e-b92b-e1670922138d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.806445 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca" (OuterVolumeSpecName: "client-ca") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.806936 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp" (OuterVolumeSpecName: "tmp") pod "cf4411dd-78f7-458e-b92b-e1670922138d" (UID: "cf4411dd-78f7-458e-b92b-e1670922138d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.807430 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config" (OuterVolumeSpecName: "config") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.815343 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm" (OuterVolumeSpecName: "kube-api-access-thjpm") pod "cf4411dd-78f7-458e-b92b-e1670922138d" (UID: "cf4411dd-78f7-458e-b92b-e1670922138d"). InnerVolumeSpecName "kube-api-access-thjpm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.816350 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf4411dd-78f7-458e-b92b-e1670922138d" (UID: "cf4411dd-78f7-458e-b92b-e1670922138d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.817192 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.826337 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.826484 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.826885 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4" (OuterVolumeSpecName: "kube-api-access-7v7d4") pod "f19c89c8-8db7-461b-bf1f-61133b64a2da" (UID: "f19c89c8-8db7-461b-bf1f-61133b64a2da"). InnerVolumeSpecName "kube-api-access-7v7d4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.841232 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.841319 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn" event={"ID":"cf4411dd-78f7-458e-b92b-e1670922138d","Type":"ContainerDied","Data":"4d61dc404432c1e8946db81896045998265f4c86227604ea33706d4137e5abad"} Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.841377 5109 scope.go:117] "RemoveContainer" containerID="3611fea1f040b89ed0cf69af4ec7d3876bb5b3a914c9fb1458a5b9e6901f615d" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.849341 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" event={"ID":"f19c89c8-8db7-461b-bf1f-61133b64a2da","Type":"ContainerDied","Data":"bf7dc40e15b2b66840f06f571e80c4a0310e6ab5a37b7f5f4d56a64589e29558"} Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.849575 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-r8nwv" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.894880 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.895215 5109 scope.go:117] "RemoveContainer" containerID="41942d56c832f013ddb8b9c5a1c0321a5e20d5bc80b20bb9c41b02bf585fbff7" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.898781 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-r8nwv"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.903681 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"] Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.899616 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.903944 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904023 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904165 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904294 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904397 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncq8s\" (UniqueName: \"kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904530 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jbs\" (UniqueName: \"kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " 
pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904629 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904780 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.904865 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905015 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905263 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-config\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905635 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f19c89c8-8db7-461b-bf1f-61133b64a2da-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905701 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905764 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf4411dd-78f7-458e-b92b-e1670922138d-tmp\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905822 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7v7d4\" (UniqueName: \"kubernetes.io/projected/f19c89c8-8db7-461b-bf1f-61133b64a2da-kube-api-access-7v7d4\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905882 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-shrhn"]
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905892 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf4411dd-78f7-458e-b92b-e1670922138d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905966 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-config\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905979 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf4411dd-78f7-458e-b92b-e1670922138d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905989 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f19c89c8-8db7-461b-bf1f-61133b64a2da-tmp\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.905999 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f19c89c8-8db7-461b-bf1f-61133b64a2da-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:13 crc kubenswrapper[5109]: I0217 00:11:13.906009 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-thjpm\" (UniqueName: \"kubernetes.io/projected/cf4411dd-78f7-458e-b92b-e1670922138d-kube-api-access-thjpm\") on node \"crc\" DevicePath \"\""
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008214 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008258 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008301 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008334 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008365 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008382 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008407 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008425 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008442 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ncq8s\" (UniqueName: \"kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008464 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5jbs\" (UniqueName: \"kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.008479 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.011101 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.013867 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.014166 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.014393 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.015974 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.016678 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.016938 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.022087 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.033357 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.041120 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5jbs\" (UniqueName: \"kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs\") pod \"route-controller-manager-85c68776d9-c98pk\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.043427 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncq8s\" (UniqueName: \"kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s\") pod \"controller-manager-79f769b86f-dknhd\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.116811 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.156572 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.228478 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t9gkm"]
Feb 17 00:11:14 crc kubenswrapper[5109]: W0217 00:11:14.338137 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-e6db397e7304d8e3c30a345f770e92ef82129752f53c9de8ab3f3872803253ad WatchSource:0}: Error finding container e6db397e7304d8e3c30a345f770e92ef82129752f53c9de8ab3f3872803253ad: Status 404 returned error can't find the container with id e6db397e7304d8e3c30a345f770e92ef82129752f53c9de8ab3f3872803253ad
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.354082 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"]
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.645912 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"]
Feb 17 00:11:14 crc kubenswrapper[5109]: W0217 00:11:14.653857 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4c54232_fc8a_4ebb_9adb_69cc3b577b64.slice/crio-d757926f0eea58558d31b62dd28398056ebf85f956fd743b8dd0a83ee97337c4 WatchSource:0}: Error finding container d757926f0eea58558d31b62dd28398056ebf85f956fd743b8dd0a83ee97337c4: Status 404 returned error can't find the container with id d757926f0eea58558d31b62dd28398056ebf85f956fd743b8dd0a83ee97337c4
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.857416 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerStarted","Data":"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.860112 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"104a41b2002e165b05952d570e4eda1fa64ce4c7852135ca10165b807db63e6a"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.860158 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e6db397e7304d8e3c30a345f770e92ef82129752f53c9de8ab3f3872803253ad"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.864511 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerStarted","Data":"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.868076 5109 generic.go:358] "Generic (PLEG): container finished" podID="7c7999b4-952b-46c2-8381-459a7524cd88" containerID="37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.868126 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerDied","Data":"37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.872196 5109 generic.go:358] "Generic (PLEG): container finished" podID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerID="73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.872272 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerDied","Data":"73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.876658 5109 generic.go:358] "Generic (PLEG): container finished" podID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerID="1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.876756 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerDied","Data":"1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.881231 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" event={"ID":"70c5f413-d875-42e1-bf81-1b616b6148f0","Type":"ContainerStarted","Data":"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.881289 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" event={"ID":"70c5f413-d875-42e1-bf81-1b616b6148f0","Type":"ContainerStarted","Data":"5a0891ebc22c5141ca976870c56323b58559cd2919b21826f0800d3a6f508dc1"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.882366 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.886816 5109 generic.go:358] "Generic (PLEG): container finished" podID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerID="1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.887032 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerDied","Data":"1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.890130 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t9gkm" event={"ID":"1d9259cd-7490-4a4f-b09c-db6d25fadf0e","Type":"ContainerStarted","Data":"510e95ba76b59fb23a96ed2c5aab3a2ac8d2ec86b91a2358e054af24ebc84ed0"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.890523 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t9gkm" event={"ID":"1d9259cd-7490-4a4f-b09c-db6d25fadf0e","Type":"ContainerStarted","Data":"7125833f30178ff7df0450e79f26cde1a015e2890818755299a344d253049323"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.903069 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"e1b2ea16bc2c6569658f10c8fec63f5edec096f24108ed215367b140ce45d844"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.903132 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"c61534b54daf6a9508a85e0f270b9d264293c27cfdf9ec7820a3cd0bafe1e658"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.904944 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" event={"ID":"d4c54232-fc8a-4ebb-9adb-69cc3b577b64","Type":"ContainerStarted","Data":"d757926f0eea58558d31b62dd28398056ebf85f956fd743b8dd0a83ee97337c4"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.907716 5109 generic.go:358] "Generic (PLEG): container finished" podID="e9b99043-6adb-499d-bcec-e0003af60fed" containerID="04b335a082b4ffe274a23ce35b678e48f661c86f726964240fa3431ec2cdbb89" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.907807 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerDied","Data":"04b335a082b4ffe274a23ce35b678e48f661c86f726964240fa3431ec2cdbb89"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.910306 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e490326a19cdb9788743fc02536c4491bdfef3d7692ef069bd8098df320e026f"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.910352 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"075718fc13a494247477a58ac5d16676f025aa5d252b325a300b921e6a827a8f"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.910966 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.919011 5109 generic.go:358] "Generic (PLEG): container finished" podID="74a26206-1199-4cf4-912a-fa5e03a96713" containerID="a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789" exitCode=0
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.919142 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerDied","Data":"a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789"}
Feb 17 00:11:14 crc kubenswrapper[5109]: I0217 00:11:14.996994 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" podStartSLOduration=5.9969731710000005 podStartE2EDuration="5.996973171s" podCreationTimestamp="2026-02-17 00:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:14.995359458 +0000 UTC m=+146.326914226" watchObservedRunningTime="2026-02-17 00:11:14.996973171 +0000 UTC m=+146.328527929"
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.241822 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd"
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.481276 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4411dd-78f7-458e-b92b-e1670922138d" path="/var/lib/kubelet/pods/cf4411dd-78f7-458e-b92b-e1670922138d/volumes"
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.482900 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19c89c8-8db7-461b-bf1f-61133b64a2da" path="/var/lib/kubelet/pods/f19c89c8-8db7-461b-bf1f-61133b64a2da/volumes"
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.932128 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" event={"ID":"d4c54232-fc8a-4ebb-9adb-69cc3b577b64","Type":"ContainerStarted","Data":"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb"}
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.936341 5109 generic.go:358] "Generic (PLEG): container finished" podID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerID="85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083" exitCode=0
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.936454 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerDied","Data":"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083"}
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.944102 5109 generic.go:358] "Generic (PLEG): container finished" podID="387ec0da-dcb1-4001-8439-5793c9384015" containerID="9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8" exitCode=0
Feb 17 00:11:15 crc kubenswrapper[5109]: I0217 00:11:15.944489 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerDied","Data":"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8"}
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.366500 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.374363 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.427934 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" podStartSLOduration=7.427912225 podStartE2EDuration="7.427912225s" podCreationTimestamp="2026-02-17 00:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:16.424152106 +0000 UTC m=+147.755706884" watchObservedRunningTime="2026-02-17 00:11:16.427912225 +0000 UTC m=+147.759466983"
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.962167 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerStarted","Data":"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929"}
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.968325 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerStarted","Data":"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086"}
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.986015 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerStarted","Data":"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504"}
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.987437 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lh8gv" podStartSLOduration=4.6738943 podStartE2EDuration="21.987422666s" podCreationTimestamp="2026-02-17 00:10:55 +0000 UTC" firstStartedPulling="2026-02-17 00:10:56.504486174 +0000 UTC m=+127.836040932" lastFinishedPulling="2026-02-17 00:11:13.81801454 +0000 UTC m=+145.149569298" observedRunningTime="2026-02-17 00:11:16.985048524 +0000 UTC m=+148.316603282" watchObservedRunningTime="2026-02-17 00:11:16.987422666 +0000 UTC m=+148.318977424"
Feb 17 00:11:16 crc kubenswrapper[5109]: I0217 00:11:16.993927 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerStarted","Data":"5c22d971aa9cb2f34a7c4eec583ac2c94773eecb722b129d1c8c8414b1efe504"}
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.005255 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zf6ds" podStartSLOduration=5.778813792 podStartE2EDuration="23.005238905s" podCreationTimestamp="2026-02-17 00:10:54 +0000 UTC" firstStartedPulling="2026-02-17 00:10:56.506097476 +0000 UTC m=+127.837652234" lastFinishedPulling="2026-02-17 00:11:13.732522579 +0000 UTC m=+145.064077347" observedRunningTime="2026-02-17 00:11:17.004867386 +0000 UTC m=+148.336422144" watchObservedRunningTime="2026-02-17 00:11:17.005238905 +0000 UTC m=+148.336793663"
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.011207 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerStarted","Data":"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd"}
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.030424 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78v77" podStartSLOduration=5.821762454 podStartE2EDuration="23.030409918s" podCreationTimestamp="2026-02-17 00:10:54 +0000 UTC" firstStartedPulling="2026-02-17 00:10:56.497167722 +0000 UTC m=+127.828722480" lastFinishedPulling="2026-02-17 00:11:13.705815186 +0000 UTC m=+145.037369944" observedRunningTime="2026-02-17 00:11:17.029318729 +0000 UTC m=+148.360873487" watchObservedRunningTime="2026-02-17 00:11:17.030409918 +0000 UTC m=+148.361964666"
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.053429 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4s9k" podStartSLOduration=5.90708144 podStartE2EDuration="21.053250479s" podCreationTimestamp="2026-02-17 00:10:56 +0000 UTC" firstStartedPulling="2026-02-17 00:10:58.604091457 +0000 UTC m=+129.935646215" lastFinishedPulling="2026-02-17 00:11:13.750260476 +0000 UTC m=+145.081815254" observedRunningTime="2026-02-17 00:11:17.050474046 +0000 UTC m=+148.382028804" watchObservedRunningTime="2026-02-17 00:11:17.053250479 +0000 UTC m=+148.384805237"
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.074449 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjl8l" podStartSLOduration=5.755930661 podStartE2EDuration="23.074435147s" podCreationTimestamp="2026-02-17 00:10:54 +0000 UTC" firstStartedPulling="2026-02-17 00:10:56.499455742 +0000 UTC m=+127.831010500" lastFinishedPulling="2026-02-17 00:11:13.817960228 +0000 UTC m=+145.149514986" observedRunningTime="2026-02-17 00:11:17.072215969 +0000 UTC m=+148.403770737" watchObservedRunningTime="2026-02-17 00:11:17.074435147 +0000 UTC m=+148.405989905"
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.199146 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4s9k"
Feb 17 00:11:17 crc kubenswrapper[5109]: I0217 00:11:17.199208 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-p4s9k"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.113337 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerStarted","Data":"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823"}
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.123128 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerStarted","Data":"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4"}
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.125813 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerStarted","Data":"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1"}
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.130131 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t9gkm" event={"ID":"1d9259cd-7490-4a4f-b09c-db6d25fadf0e","Type":"ContainerStarted","Data":"c6c9ce7f27faa0b67544331f26ca52514e9c46b0a6163ece19337f6a34424292"}
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.136760 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-247nb" podStartSLOduration=6.995473941 podStartE2EDuration="20.136679285s" podCreationTimestamp="2026-02-17 00:10:58 +0000 UTC" firstStartedPulling="2026-02-17 00:11:00.685608297 +0000 UTC m=+132.017163055" lastFinishedPulling="2026-02-17 00:11:13.826813641 +0000 UTC m=+145.158368399" observedRunningTime="2026-02-17 00:11:18.132255089 +0000 UTC m=+149.463809857" watchObservedRunningTime="2026-02-17 00:11:18.136679285 +0000 UTC m=+149.468234043"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.149127 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-t9gkm" podStartSLOduration=128.149112813 podStartE2EDuration="2m8.149112813s" podCreationTimestamp="2026-02-17 00:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:18.146576416 +0000 UTC m=+149.478131174" watchObservedRunningTime="2026-02-17 00:11:18.149112813 +0000 UTC m=+149.480667571"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.163241 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5p9cp" podStartSLOduration=6.128141405 podStartE2EDuration="22.163227544s" podCreationTimestamp="2026-02-17 00:10:56 +0000 UTC" firstStartedPulling="2026-02-17 00:10:57.550403079 +0000 UTC m=+128.881957837" lastFinishedPulling="2026-02-17 00:11:13.585489218 +0000 UTC m=+144.917043976" observedRunningTime="2026-02-17 00:11:18.161396756 +0000 UTC m=+149.492951514" watchObservedRunningTime="2026-02-17 00:11:18.163227544 +0000 UTC m=+149.494782302"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.189614 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-trn6m" podStartSLOduration=7.056637244 podStartE2EDuration="21.189586148s" podCreationTimestamp="2026-02-17 00:10:57 +0000 UTC" firstStartedPulling="2026-02-17 00:10:59.649331835 +0000 UTC m=+130.980886593" lastFinishedPulling="2026-02-17 00:11:13.782280739 +0000 UTC m=+145.113835497" observedRunningTime="2026-02-17 00:11:18.185333316 +0000 UTC m=+149.516888074" watchObservedRunningTime="2026-02-17 00:11:18.189586148 +0000 UTC m=+149.521140906"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.278910 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.279230 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.469658 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p4s9k" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="registry-server" probeResult="failure" output=<
Feb 17 00:11:18 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s
Feb 17 00:11:18 crc kubenswrapper[5109]: >
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.938889 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:11:18 crc kubenswrapper[5109]: I0217 00:11:18.938936 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-247nb"
Feb 17 00:11:19 crc kubenswrapper[5109]: I0217 00:11:19.323816 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-trn6m" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="registry-server" probeResult="failure" output=<
Feb 17 00:11:19 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s
Feb 17 00:11:19 crc kubenswrapper[5109]: >
Feb 17 00:11:19 crc kubenswrapper[5109]: E0217 00:11:19.445867 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:11:19 crc kubenswrapper[5109]: E0217 00:11:19.447184 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:11:19 crc kubenswrapper[5109]: E0217 00:11:19.448480 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 17 00:11:19 crc kubenswrapper[5109]: E0217 00:11:19.448521 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 17 00:11:19 crc kubenswrapper[5109]: I0217 00:11:19.974232 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-247nb" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="registry-server" probeResult="failure" output=<
Feb 17 00:11:19 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s
Feb 17 00:11:19 crc kubenswrapper[5109]: >
Feb 17 00:11:20 crc kubenswrapper[5109]: E0217 00:11:20.470093 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 00:11:20 crc kubenswrapper[5109]: I0217 00:11:20.701693 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g"
Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.710862 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mrr4k_b3e6e84e-201d-45cb-a34e-351fcc111c55/kube-multus-additional-cni-plugins/0.log"
Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.711559 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k"
Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772015 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist\") pod \"b3e6e84e-201d-45cb-a34e-351fcc111c55\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") "
Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772306 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir\") pod \"b3e6e84e-201d-45cb-a34e-351fcc111c55\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") "
Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772378 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "b3e6e84e-201d-45cb-a34e-351fcc111c55" (UID: "b3e6e84e-201d-45cb-a34e-351fcc111c55"). InnerVolumeSpecName "tuning-conf-dir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772549 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9kwl\" (UniqueName: \"kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl\") pod \"b3e6e84e-201d-45cb-a34e-351fcc111c55\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772682 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready\") pod \"b3e6e84e-201d-45cb-a34e-351fcc111c55\" (UID: \"b3e6e84e-201d-45cb-a34e-351fcc111c55\") " Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772920 5109 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3e6e84e-201d-45cb-a34e-351fcc111c55-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.772936 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b3e6e84e-201d-45cb-a34e-351fcc111c55" (UID: "b3e6e84e-201d-45cb-a34e-351fcc111c55"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.773087 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready" (OuterVolumeSpecName: "ready") pod "b3e6e84e-201d-45cb-a34e-351fcc111c55" (UID: "b3e6e84e-201d-45cb-a34e-351fcc111c55"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.781246 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl" (OuterVolumeSpecName: "kube-api-access-t9kwl") pod "b3e6e84e-201d-45cb-a34e-351fcc111c55" (UID: "b3e6e84e-201d-45cb-a34e-351fcc111c55"). InnerVolumeSpecName "kube-api-access-t9kwl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.874625 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t9kwl\" (UniqueName: \"kubernetes.io/projected/b3e6e84e-201d-45cb-a34e-351fcc111c55-kube-api-access-t9kwl\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.874672 5109 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b3e6e84e-201d-45cb-a34e-351fcc111c55-ready\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:23 crc kubenswrapper[5109]: I0217 00:11:23.874688 5109 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b3e6e84e-201d-45cb-a34e-351fcc111c55-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252048 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mrr4k_b3e6e84e-201d-45cb-a34e-351fcc111c55/kube-multus-additional-cni-plugins/0.log" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252102 5109 generic.go:358] "Generic (PLEG): container finished" podID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" exitCode=137 Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252194 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252251 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" event={"ID":"b3e6e84e-201d-45cb-a34e-351fcc111c55","Type":"ContainerDied","Data":"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6"} Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252311 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mrr4k" event={"ID":"b3e6e84e-201d-45cb-a34e-351fcc111c55","Type":"ContainerDied","Data":"85a0374b7e35458fdcac8e7d8152ba22b4bc690809319c1e206e841e6a6ba50e"} Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.252334 5109 scope.go:117] "RemoveContainer" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.275331 5109 scope.go:117] "RemoveContainer" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" Feb 17 00:11:24 crc kubenswrapper[5109]: E0217 00:11:24.275752 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6\": container with ID starting with 56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6 not found: ID does not exist" containerID="56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.275790 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6"} err="failed to get container status \"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6\": rpc error: code = NotFound desc = could not find container 
\"56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6\": container with ID starting with 56eba437445dd741d363ddbbafd1a629d5ff6bb99cbd659a0ce57cebe6b0aec6 not found: ID does not exist" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.282081 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mrr4k"] Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.284492 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mrr4k"] Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.444060 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2tl49" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.793081 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.793348 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:11:24 crc kubenswrapper[5109]: I0217 00:11:24.875019 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.057379 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.057441 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.096191 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.246540 5109 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.246628 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.309368 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.310408 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.341383 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.364806 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.456463 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.456530 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.472678 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" path="/var/lib/kubelet/pods/b3e6e84e-201d-45cb-a34e-351fcc111c55/volumes" Feb 17 00:11:25 crc kubenswrapper[5109]: I0217 00:11:25.499300 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:26 crc kubenswrapper[5109]: I0217 00:11:26.324798 5109 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:26 crc kubenswrapper[5109]: I0217 00:11:26.792918 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:11:26 crc kubenswrapper[5109]: I0217 00:11:26.792973 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:11:26 crc kubenswrapper[5109]: I0217 00:11:26.829988 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:11:27 crc kubenswrapper[5109]: I0217 00:11:27.236321 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:11:27 crc kubenswrapper[5109]: I0217 00:11:27.271880 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:11:27 crc kubenswrapper[5109]: I0217 00:11:27.306295 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:11:27 crc kubenswrapper[5109]: I0217 00:11:27.689258 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"] Feb 17 00:11:27 crc kubenswrapper[5109]: I0217 00:11:27.689546 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zf6ds" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="registry-server" containerID="cri-o://21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086" gracePeriod=2 Feb 17 00:11:28 crc kubenswrapper[5109]: I0217 00:11:28.321552 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-trn6m" Feb 17 00:11:28 
crc kubenswrapper[5109]: I0217 00:11:28.370105 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-trn6m" Feb 17 00:11:28 crc kubenswrapper[5109]: I0217 00:11:28.688447 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lh8gv"] Feb 17 00:11:28 crc kubenswrapper[5109]: I0217 00:11:28.688723 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lh8gv" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="registry-server" containerID="cri-o://0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929" gracePeriod=2 Feb 17 00:11:28 crc kubenswrapper[5109]: I0217 00:11:28.993767 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-247nb" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.040718 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-247nb" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.129373 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.211619 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.211868 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" podUID="70c5f413-d875-42e1-bf81-1b616b6148f0" containerName="controller-manager" containerID="cri-o://06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d" gracePeriod=30 Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.243601 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.243872 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.244103 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" podUID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" containerName="route-controller-manager" containerID="cri-o://c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb" gracePeriod=30 Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.246053 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities\") pod \"7c7999b4-952b-46c2-8381-459a7524cd88\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.246197 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tjhg\" (UniqueName: \"kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg\") pod \"7c7999b4-952b-46c2-8381-459a7524cd88\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.246216 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content\") pod \"7c7999b4-952b-46c2-8381-459a7524cd88\" (UID: \"7c7999b4-952b-46c2-8381-459a7524cd88\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.247747 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities" (OuterVolumeSpecName: 
"utilities") pod "7c7999b4-952b-46c2-8381-459a7524cd88" (UID: "7c7999b4-952b-46c2-8381-459a7524cd88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.262568 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg" (OuterVolumeSpecName: "kube-api-access-9tjhg") pod "7c7999b4-952b-46c2-8381-459a7524cd88" (UID: "7c7999b4-952b-46c2-8381-459a7524cd88"). InnerVolumeSpecName "kube-api-access-9tjhg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.306772 5109 generic.go:358] "Generic (PLEG): container finished" podID="7c7999b4-952b-46c2-8381-459a7524cd88" containerID="0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929" exitCode=0 Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.307059 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerDied","Data":"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929"} Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.307151 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lh8gv" event={"ID":"7c7999b4-952b-46c2-8381-459a7524cd88","Type":"ContainerDied","Data":"f09b4fa447f1b47a845f73b292607908ad48e53f1db30e68c49228c8d61db398"} Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.307224 5109 scope.go:117] "RemoveContainer" containerID="0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.307427 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lh8gv" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.337145 5109 generic.go:358] "Generic (PLEG): container finished" podID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerID="21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086" exitCode=0 Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.337636 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zf6ds" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.337807 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerDied","Data":"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086"} Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.337951 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zf6ds" event={"ID":"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6","Type":"ContainerDied","Data":"e9d496cae2c2a977ee39225a5aeac7071fb0e40edd4c9d193f93fd9ac74c6dc7"} Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.347742 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities\") pod \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.347791 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content\") pod \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.347821 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xkq\" (UniqueName: \"kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq\") pod \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\" (UID: \"dad5932b-d8a0-4ca7-ad78-ae817fbc3be6\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.348016 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tjhg\" (UniqueName: \"kubernetes.io/projected/7c7999b4-952b-46c2-8381-459a7524cd88-kube-api-access-9tjhg\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.348027 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.353638 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq" (OuterVolumeSpecName: "kube-api-access-q6xkq") pod "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" (UID: "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6"). InnerVolumeSpecName "kube-api-access-q6xkq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.355091 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities" (OuterVolumeSpecName: "utilities") pod "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" (UID: "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.379668 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c7999b4-952b-46c2-8381-459a7524cd88" (UID: "7c7999b4-952b-46c2-8381-459a7524cd88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.394510 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" (UID: "dad5932b-d8a0-4ca7-ad78-ae817fbc3be6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.403842 5109 scope.go:117] "RemoveContainer" containerID="37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.427666 5109 scope.go:117] "RemoveContainer" containerID="c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.446835 5109 scope.go:117] "RemoveContainer" containerID="0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.447818 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929\": container with ID starting with 0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929 not found: ID does not exist" containerID="0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 
00:11:29.447851 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929"} err="failed to get container status \"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929\": rpc error: code = NotFound desc = could not find container \"0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929\": container with ID starting with 0061b8df370c05165138c24c03c0d23f3f491fcffa391e42e9bc6449b46bf929 not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.447870 5109 scope.go:117] "RemoveContainer" containerID="37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.448664 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.448680 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.448689 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6xkq\" (UniqueName: \"kubernetes.io/projected/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6-kube-api-access-q6xkq\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.448697 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c7999b4-952b-46c2-8381-459a7524cd88-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.461750 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77\": container with ID starting with 37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77 not found: ID does not exist" containerID="37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.462792 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77"} err="failed to get container status \"37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77\": rpc error: code = NotFound desc = could not find container \"37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77\": container with ID starting with 37b07cc64d3340adf96ec4eec842f13341698456da840bb40742cba1099fea77 not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.462892 5109 scope.go:117] "RemoveContainer" containerID="c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.463564 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c\": container with ID starting with c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c not found: ID does not exist" containerID="c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.463623 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c"} err="failed to get container status \"c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c\": rpc error: code = NotFound desc = could not find container \"c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c\": container with ID 
starting with c50635ddaaa15696b411aa162bef7c398806fe504a883ea9fa24d753f9ada22c not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.463637 5109 scope.go:117] "RemoveContainer" containerID="21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.489442 5109 scope.go:117] "RemoveContainer" containerID="73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.526344 5109 scope.go:117] "RemoveContainer" containerID="4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.545632 5109 scope.go:117] "RemoveContainer" containerID="21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.546056 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086\": container with ID starting with 21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086 not found: ID does not exist" containerID="21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.546085 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086"} err="failed to get container status \"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086\": rpc error: code = NotFound desc = could not find container \"21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086\": container with ID starting with 21bfab5aa96f110c8680754950dcf86412d475f5ba1d51b2d5ebd2d2f1036086 not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.546104 5109 scope.go:117] "RemoveContainer" 
containerID="73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.546357 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45\": container with ID starting with 73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45 not found: ID does not exist" containerID="73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.546392 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45"} err="failed to get container status \"73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45\": rpc error: code = NotFound desc = could not find container \"73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45\": container with ID starting with 73971a2c19f7d87df4314ccc3141d6f2cbd98e4d070a7504189441ca4d6ccd45 not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.546419 5109 scope.go:117] "RemoveContainer" containerID="4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7" Feb 17 00:11:29 crc kubenswrapper[5109]: E0217 00:11:29.546623 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7\": container with ID starting with 4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7 not found: ID does not exist" containerID="4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.546642 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7"} err="failed to get container status \"4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7\": rpc error: code = NotFound desc = could not find container \"4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7\": container with ID starting with 4db0df57ab76b8b0d43b8ad37679ffb2993b4596dd7e6f053dfb6f0a29e3f7c7 not found: ID does not exist" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.635478 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lh8gv"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.636248 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lh8gv"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.654848 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.658224 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zf6ds"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.707923 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.733881 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.734833 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.734924 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735024 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" containerName="route-controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735097 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" containerName="route-controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735170 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="extract-utilities" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735240 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="extract-utilities" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735319 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735390 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" 
containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735462 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="extract-utilities" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735522 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="extract-utilities" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735608 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735684 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735755 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="extract-content" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735824 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="extract-content" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735887 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="extract-content" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.735953 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="extract-content" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.736134 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.736202 5109 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="b3e6e84e-201d-45cb-a34e-351fcc111c55" containerName="kube-multus-additional-cni-plugins" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.736268 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" containerName="route-controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.736342 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" containerName="registry-server" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.755937 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.756314 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.853573 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert\") pod \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.853733 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5jbs\" (UniqueName: \"kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs\") pod \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.853792 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca\") pod \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\" (UID: 
\"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.853860 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config\") pod \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.853942 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp\") pod \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\" (UID: \"d4c54232-fc8a-4ebb-9adb-69cc3b577b64\") " Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854108 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854200 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854241 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfsj9\" (UniqueName: \"kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " 
pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854283 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854310 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854438 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp" (OuterVolumeSpecName: "tmp") pod "d4c54232-fc8a-4ebb-9adb-69cc3b577b64" (UID: "d4c54232-fc8a-4ebb-9adb-69cc3b577b64"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854497 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4c54232-fc8a-4ebb-9adb-69cc3b577b64" (UID: "d4c54232-fc8a-4ebb-9adb-69cc3b577b64"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.854871 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config" (OuterVolumeSpecName: "config") pod "d4c54232-fc8a-4ebb-9adb-69cc3b577b64" (UID: "d4c54232-fc8a-4ebb-9adb-69cc3b577b64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.857140 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs" (OuterVolumeSpecName: "kube-api-access-b5jbs") pod "d4c54232-fc8a-4ebb-9adb-69cc3b577b64" (UID: "d4c54232-fc8a-4ebb-9adb-69cc3b577b64"). InnerVolumeSpecName "kube-api-access-b5jbs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.857130 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4c54232-fc8a-4ebb-9adb-69cc3b577b64" (UID: "d4c54232-fc8a-4ebb-9adb-69cc3b577b64"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.911772 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.947737 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.948456 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70c5f413-d875-42e1-bf81-1b616b6148f0" containerName="controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.948472 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c5f413-d875-42e1-bf81-1b616b6148f0" containerName="controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.948612 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="70c5f413-d875-42e1-bf81-1b616b6148f0" containerName="controller-manager" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.953022 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.953155 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955213 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955266 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfsj9\" (UniqueName: \"kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955300 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955323 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955355 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955410 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955422 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5jbs\" (UniqueName: \"kubernetes.io/projected/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-kube-api-access-b5jbs\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955431 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955439 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955449 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d4c54232-fc8a-4ebb-9adb-69cc3b577b64-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.955833 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc 
kubenswrapper[5109]: I0217 00:11:29.956423 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.957977 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.961126 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:29 crc kubenswrapper[5109]: I0217 00:11:29.979966 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfsj9\" (UniqueName: \"kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9\") pod \"route-controller-manager-656fc8d67f-vbvgm\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.056728 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" 
(UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.056869 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.056906 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.056967 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.056988 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncq8s\" (UniqueName: \"kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057010 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert\") pod \"70c5f413-d875-42e1-bf81-1b616b6148f0\" (UID: \"70c5f413-d875-42e1-bf81-1b616b6148f0\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057158 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057195 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgzg\" (UniqueName: \"kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057215 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057233 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057261 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 
00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057287 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057348 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp" (OuterVolumeSpecName: "tmp") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057913 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config" (OuterVolumeSpecName: "config") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.057924 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca" (OuterVolumeSpecName: "client-ca") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.058318 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.061548 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.061731 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s" (OuterVolumeSpecName: "kube-api-access-ncq8s") pod "70c5f413-d875-42e1-bf81-1b616b6148f0" (UID: "70c5f413-d875-42e1-bf81-1b616b6148f0"). InnerVolumeSpecName "kube-api-access-ncq8s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.075420 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.095757 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.096095 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4s9k" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="registry-server" containerID="cri-o://5c22d971aa9cb2f34a7c4eec583ac2c94773eecb722b129d1c8c8414b1efe504" gracePeriod=2 Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.158866 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.158922 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgzg\" (UniqueName: \"kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.158942 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.158964 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.158994 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159019 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159110 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159122 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159131 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c5f413-d875-42e1-bf81-1b616b6148f0-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159139 5109 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70c5f413-d875-42e1-bf81-1b616b6148f0-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159148 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ncq8s\" (UniqueName: \"kubernetes.io/projected/70c5f413-d875-42e1-bf81-1b616b6148f0-kube-api-access-ncq8s\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159157 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c5f413-d875-42e1-bf81-1b616b6148f0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159344 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.159829 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.160168 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.160372 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.165559 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.179227 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbgzg\" (UniqueName: \"kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg\") pod \"controller-manager-d455c4f7d-8dmrt\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.269416 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.275145 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:30 crc kubenswrapper[5109]: W0217 00:11:30.285466 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7867b2f2_31c0_4ffd_85b3_93fefbc2e565.slice/crio-3e16093b22aa18f2b299239339a8883068cb647a1a0d47f0d558616033789cdb WatchSource:0}: Error finding container 3e16093b22aa18f2b299239339a8883068cb647a1a0d47f0d558616033789cdb: Status 404 returned error can't find the container with id 3e16093b22aa18f2b299239339a8883068cb647a1a0d47f0d558616033789cdb Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.351347 5109 generic.go:358] "Generic (PLEG): container finished" podID="70c5f413-d875-42e1-bf81-1b616b6148f0" containerID="06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d" exitCode=0 Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.351969 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" event={"ID":"70c5f413-d875-42e1-bf81-1b616b6148f0","Type":"ContainerDied","Data":"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.352008 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" event={"ID":"70c5f413-d875-42e1-bf81-1b616b6148f0","Type":"ContainerDied","Data":"5a0891ebc22c5141ca976870c56323b58559cd2919b21826f0800d3a6f508dc1"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.352069 5109 scope.go:117] "RemoveContainer" containerID="06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.359783 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79f769b86f-dknhd" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.394438 5109 generic.go:358] "Generic (PLEG): container finished" podID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" containerID="c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb" exitCode=0 Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.394538 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" event={"ID":"d4c54232-fc8a-4ebb-9adb-69cc3b577b64","Type":"ContainerDied","Data":"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.394571 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" event={"ID":"d4c54232-fc8a-4ebb-9adb-69cc3b577b64","Type":"ContainerDied","Data":"d757926f0eea58558d31b62dd28398056ebf85f956fd743b8dd0a83ee97337c4"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.394689 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.397729 5109 generic.go:358] "Generic (PLEG): container finished" podID="e9b99043-6adb-499d-bcec-e0003af60fed" containerID="5c22d971aa9cb2f34a7c4eec583ac2c94773eecb722b129d1c8c8414b1efe504" exitCode=0 Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.397836 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerDied","Data":"5c22d971aa9cb2f34a7c4eec583ac2c94773eecb722b129d1c8c8414b1efe504"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.403912 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" event={"ID":"7867b2f2-31c0-4ffd-85b3-93fefbc2e565","Type":"ContainerStarted","Data":"3e16093b22aa18f2b299239339a8883068cb647a1a0d47f0d558616033789cdb"} Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.408804 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.409581 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.412546 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79f769b86f-dknhd"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.421431 5109 scope.go:117] "RemoveContainer" containerID="06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d" Feb 17 00:11:30 crc kubenswrapper[5109]: E0217 00:11:30.429566 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d\": container with ID starting with 06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d not found: ID does not exist" containerID="06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.429754 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d"} err="failed to get container status \"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d\": rpc error: code = NotFound desc = could not find container \"06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d\": container with ID starting with 06e275a6abb80454e6984a383e736a5da8125f8c077ccbe929424e7631c69c3d not found: ID does not exist" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.429848 5109 scope.go:117] "RemoveContainer" containerID="c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.436826 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.438858 5109 kubelet.go:2547] "SyncLoop REMOVE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-85c68776d9-c98pk"] Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.454856 5109 scope.go:117] "RemoveContainer" containerID="c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb" Feb 17 00:11:30 crc kubenswrapper[5109]: E0217 00:11:30.457167 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb\": container with ID starting with c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb not found: ID does not exist" containerID="c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.457208 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb"} err="failed to get container status \"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb\": rpc error: code = NotFound desc = could not find container \"c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb\": container with ID starting with c9981b57d1b0c22988c02cfa5cbd4e4b80f710724645d856a0089491efe6a1cb not found: ID does not exist" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.567184 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7phv\" (UniqueName: \"kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv\") pod \"e9b99043-6adb-499d-bcec-e0003af60fed\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.567663 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content\") pod 
\"e9b99043-6adb-499d-bcec-e0003af60fed\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.567755 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities\") pod \"e9b99043-6adb-499d-bcec-e0003af60fed\" (UID: \"e9b99043-6adb-499d-bcec-e0003af60fed\") " Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.568642 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities" (OuterVolumeSpecName: "utilities") pod "e9b99043-6adb-499d-bcec-e0003af60fed" (UID: "e9b99043-6adb-499d-bcec-e0003af60fed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.572601 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv" (OuterVolumeSpecName: "kube-api-access-q7phv") pod "e9b99043-6adb-499d-bcec-e0003af60fed" (UID: "e9b99043-6adb-499d-bcec-e0003af60fed"). InnerVolumeSpecName "kube-api-access-q7phv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.585008 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b99043-6adb-499d-bcec-e0003af60fed" (UID: "e9b99043-6adb-499d-bcec-e0003af60fed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:30 crc kubenswrapper[5109]: E0217 00:11:30.594057 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.668759 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.668804 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b99043-6adb-499d-bcec-e0003af60fed-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.668818 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7phv\" (UniqueName: \"kubernetes.io/projected/e9b99043-6adb-499d-bcec-e0003af60fed-kube-api-access-q7phv\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:30 crc kubenswrapper[5109]: I0217 00:11:30.717818 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:30 crc kubenswrapper[5109]: W0217 00:11:30.727263 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5799e5b7_f76e_4e21_8f3c_b7b45a7d949b.slice/crio-2c27e763d205afab0309361d4c704aac4450aae4682bde77e275737504310a2f WatchSource:0}: Error finding container 2c27e763d205afab0309361d4c704aac4450aae4682bde77e275737504310a2f: Status 404 returned error can't find the container with id 
2c27e763d205afab0309361d4c704aac4450aae4682bde77e275737504310a2f Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.336644 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338146 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="extract-content" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338325 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="extract-content" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338347 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="extract-utilities" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338355 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="extract-utilities" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338382 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="registry-server" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338389 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="registry-server" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.338553 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" containerName="registry-server" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.354850 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.355015 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.357183 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.357437 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.412308 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4s9k" event={"ID":"e9b99043-6adb-499d-bcec-e0003af60fed","Type":"ContainerDied","Data":"8e47ffd8c06d5c488c757eed0fbeaab611084fc6876e96e67e2da877d10d78eb"} Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.412355 5109 scope.go:117] "RemoveContainer" containerID="5c22d971aa9cb2f34a7c4eec583ac2c94773eecb722b129d1c8c8414b1efe504" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.412502 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4s9k" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.419420 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" event={"ID":"7867b2f2-31c0-4ffd-85b3-93fefbc2e565","Type":"ContainerStarted","Data":"dd11a53a7f9f5abe1ad2902125a77d5e44b1af5666ecd595685cdc65c8323a14"} Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.419878 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.420847 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" event={"ID":"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b","Type":"ContainerStarted","Data":"ec54c3f0f5eeba0a412c73a5c34a716d4df31b64b3aae5cd08e54fdbb31e6029"} Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.420881 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" event={"ID":"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b","Type":"ContainerStarted","Data":"2c27e763d205afab0309361d4c704aac4450aae4682bde77e275737504310a2f"} Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.421226 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.425787 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.441553 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" 
podStartSLOduration=2.441531558 podStartE2EDuration="2.441531558s" podCreationTimestamp="2026-02-17 00:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:31.438153639 +0000 UTC m=+162.769708397" watchObservedRunningTime="2026-02-17 00:11:31.441531558 +0000 UTC m=+162.773086316" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.442221 5109 scope.go:117] "RemoveContainer" containerID="04b335a082b4ffe274a23ce35b678e48f661c86f726964240fa3431ec2cdbb89" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.473541 5109 scope.go:117] "RemoveContainer" containerID="a6474468d5f834ed0b5af1d3b321d7378481b3403bc88aacf65293efdc5fa9f7" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.481428 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.481492 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.491696 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c5f413-d875-42e1-bf81-1b616b6148f0" path="/var/lib/kubelet/pods/70c5f413-d875-42e1-bf81-1b616b6148f0/volumes" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.492429 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7999b4-952b-46c2-8381-459a7524cd88" 
path="/var/lib/kubelet/pods/7c7999b4-952b-46c2-8381-459a7524cd88/volumes" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.493211 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c54232-fc8a-4ebb-9adb-69cc3b577b64" path="/var/lib/kubelet/pods/d4c54232-fc8a-4ebb-9adb-69cc3b577b64/volumes" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.494393 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad5932b-d8a0-4ca7-ad78-ae817fbc3be6" path="/var/lib/kubelet/pods/dad5932b-d8a0-4ca7-ad78-ae817fbc3be6/volumes" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.495066 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"] Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.495099 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4s9k"] Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.499810 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" podStartSLOduration=2.499755271 podStartE2EDuration="2.499755271s" podCreationTimestamp="2026-02-17 00:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:31.499157535 +0000 UTC m=+162.830712303" watchObservedRunningTime="2026-02-17 00:11:31.499755271 +0000 UTC m=+162.831310039" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.582955 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.583276 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.583338 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.607472 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.678481 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:31 crc kubenswrapper[5109]: I0217 00:11:31.679679 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.087214 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.427495 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"01f188b1-5fbb-4eb7-8890-d72012fdc785","Type":"ContainerStarted","Data":"a25e12d730fb7e2a17b832ebe9e8fb5c7a380f1a596995fffda579fe0d75b9fd"} Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.427746 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"01f188b1-5fbb-4eb7-8890-d72012fdc785","Type":"ContainerStarted","Data":"3cf6421eb6ca71718f76b3a4fb9da627a471826e04939eac664119559d1c5bf5"} Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.445844 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=1.4458206200000001 podStartE2EDuration="1.44582062s" podCreationTimestamp="2026-02-17 00:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:32.441911127 +0000 UTC m=+163.773465885" watchObservedRunningTime="2026-02-17 00:11:32.44582062 +0000 UTC m=+163.777375368" Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.487927 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-247nb"] Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.488222 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-247nb" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="registry-server" 
containerID="cri-o://97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823" gracePeriod=2 Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.861693 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37408: no serving certificate available for the kubelet" Feb 17 00:11:32 crc kubenswrapper[5109]: I0217 00:11:32.871737 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-247nb" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.036492 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbwrc\" (UniqueName: \"kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc\") pod \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.036575 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content\") pod \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.036632 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities\") pod \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\" (UID: \"a0800be8-a032-4a80-b5f3-6f7b11ef439e\") " Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.038296 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities" (OuterVolumeSpecName: "utilities") pod "a0800be8-a032-4a80-b5f3-6f7b11ef439e" (UID: "a0800be8-a032-4a80-b5f3-6f7b11ef439e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.043076 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc" (OuterVolumeSpecName: "kube-api-access-rbwrc") pod "a0800be8-a032-4a80-b5f3-6f7b11ef439e" (UID: "a0800be8-a032-4a80-b5f3-6f7b11ef439e"). InnerVolumeSpecName "kube-api-access-rbwrc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.137989 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rbwrc\" (UniqueName: \"kubernetes.io/projected/a0800be8-a032-4a80-b5f3-6f7b11ef439e-kube-api-access-rbwrc\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.138160 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.142090 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0800be8-a032-4a80-b5f3-6f7b11ef439e" (UID: "a0800be8-a032-4a80-b5f3-6f7b11ef439e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.239885 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0800be8-a032-4a80-b5f3-6f7b11ef439e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.440510 5109 generic.go:358] "Generic (PLEG): container finished" podID="01f188b1-5fbb-4eb7-8890-d72012fdc785" containerID="a25e12d730fb7e2a17b832ebe9e8fb5c7a380f1a596995fffda579fe0d75b9fd" exitCode=0 Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.440754 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"01f188b1-5fbb-4eb7-8890-d72012fdc785","Type":"ContainerDied","Data":"a25e12d730fb7e2a17b832ebe9e8fb5c7a380f1a596995fffda579fe0d75b9fd"} Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.443207 5109 generic.go:358] "Generic (PLEG): container finished" podID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerID="97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823" exitCode=0 Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.443375 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerDied","Data":"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823"} Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.443404 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-247nb" event={"ID":"a0800be8-a032-4a80-b5f3-6f7b11ef439e","Type":"ContainerDied","Data":"7033f67a40935eea5cfaa5f42412b63f68946494436fb043021a0c2cf459c1d5"} Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.443442 5109 scope.go:117] "RemoveContainer" containerID="97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823" Feb 17 00:11:33 crc 
kubenswrapper[5109]: I0217 00:11:33.444053 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-247nb" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.465773 5109 scope.go:117] "RemoveContainer" containerID="85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.474184 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b99043-6adb-499d-bcec-e0003af60fed" path="/var/lib/kubelet/pods/e9b99043-6adb-499d-bcec-e0003af60fed/volumes" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.474943 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-247nb"] Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.474977 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-247nb"] Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.498650 5109 scope.go:117] "RemoveContainer" containerID="088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.519341 5109 scope.go:117] "RemoveContainer" containerID="97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823" Feb 17 00:11:33 crc kubenswrapper[5109]: E0217 00:11:33.520031 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823\": container with ID starting with 97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823 not found: ID does not exist" containerID="97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.520076 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823"} 
err="failed to get container status \"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823\": rpc error: code = NotFound desc = could not find container \"97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823\": container with ID starting with 97593faabb8a3c5a39484208418b09df4f6cbf38edf35f1d78c3fc243a9ed823 not found: ID does not exist" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.520105 5109 scope.go:117] "RemoveContainer" containerID="85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083" Feb 17 00:11:33 crc kubenswrapper[5109]: E0217 00:11:33.520525 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083\": container with ID starting with 85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083 not found: ID does not exist" containerID="85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.520625 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083"} err="failed to get container status \"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083\": rpc error: code = NotFound desc = could not find container \"85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083\": container with ID starting with 85f61c78176c87bf8b277ad31775ac15969f29e54ad2445165032e593fe5c083 not found: ID does not exist" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.520707 5109 scope.go:117] "RemoveContainer" containerID="088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13" Feb 17 00:11:33 crc kubenswrapper[5109]: E0217 00:11:33.521782 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13\": container with ID starting with 088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13 not found: ID does not exist" containerID="088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13" Feb 17 00:11:33 crc kubenswrapper[5109]: I0217 00:11:33.521832 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13"} err="failed to get container status \"088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13\": rpc error: code = NotFound desc = could not find container \"088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13\": container with ID starting with 088f8c9bf8a34586d06a895018006d5101649d4cc6de4d628221ac002d3e0e13 not found: ID does not exist" Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.665473 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.764111 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access\") pod \"01f188b1-5fbb-4eb7-8890-d72012fdc785\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.764346 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir\") pod \"01f188b1-5fbb-4eb7-8890-d72012fdc785\" (UID: \"01f188b1-5fbb-4eb7-8890-d72012fdc785\") " Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.764564 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "01f188b1-5fbb-4eb7-8890-d72012fdc785" (UID: "01f188b1-5fbb-4eb7-8890-d72012fdc785"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.769283 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01f188b1-5fbb-4eb7-8890-d72012fdc785" (UID: "01f188b1-5fbb-4eb7-8890-d72012fdc785"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.865740 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f188b1-5fbb-4eb7-8890-d72012fdc785-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:34 crc kubenswrapper[5109]: I0217 00:11:34.865775 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f188b1-5fbb-4eb7-8890-d72012fdc785-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:35 crc kubenswrapper[5109]: I0217 00:11:35.461067 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 17 00:11:35 crc kubenswrapper[5109]: I0217 00:11:35.461074 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"01f188b1-5fbb-4eb7-8890-d72012fdc785","Type":"ContainerDied","Data":"3cf6421eb6ca71718f76b3a4fb9da627a471826e04939eac664119559d1c5bf5"} Feb 17 00:11:35 crc kubenswrapper[5109]: I0217 00:11:35.461117 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf6421eb6ca71718f76b3a4fb9da627a471826e04939eac664119559d1c5bf5" Feb 17 00:11:35 crc kubenswrapper[5109]: I0217 00:11:35.470680 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" path="/var/lib/kubelet/pods/a0800be8-a032-4a80-b5f3-6f7b11ef439e/volumes" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.534737 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535456 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="extract-content" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535469 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="extract-content" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535479 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="registry-server" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535484 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="registry-server" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535500 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="01f188b1-5fbb-4eb7-8890-d72012fdc785" containerName="pruner" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535507 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f188b1-5fbb-4eb7-8890-d72012fdc785" containerName="pruner" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535525 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="extract-utilities" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535530 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="extract-utilities" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535672 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0800be8-a032-4a80-b5f3-6f7b11ef439e" containerName="registry-server" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.535684 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="01f188b1-5fbb-4eb7-8890-d72012fdc785" containerName="pruner" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.557887 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.558040 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.561374 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.562284 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.611984 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.612041 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.612070 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.713704 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " 
pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.713780 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.713861 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.713968 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.714005 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.733755 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access\") pod \"installer-12-crc\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:38 crc kubenswrapper[5109]: I0217 00:11:38.932742 5109 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:11:39 crc kubenswrapper[5109]: I0217 00:11:39.340229 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 17 00:11:39 crc kubenswrapper[5109]: I0217 00:11:39.488021 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"30062c3d-5b20-47fc-afb7-8c62131e36ac","Type":"ContainerStarted","Data":"1d8d7fe706871f68545c24cd2659290b0885d973f463e4b2871875048f1b3a96"} Feb 17 00:11:40 crc kubenswrapper[5109]: I0217 00:11:40.505816 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"30062c3d-5b20-47fc-afb7-8c62131e36ac","Type":"ContainerStarted","Data":"8a1e4248b7cabc0481354c0d510e0d363fcb5d1b78eed518703c75338290ce0d"} Feb 17 00:11:40 crc kubenswrapper[5109]: E0217 00:11:40.732366 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcfae8bf_91d7_48d3_a978_1510fe282c92.slice/crio-504f6d52bccdfdf7124ab1ca39aad9207a6e2b1d13c90c68a4129a240f30faab.scope\": RecentStats: unable to find data in memory cache]" Feb 17 00:11:46 crc kubenswrapper[5109]: I0217 00:11:46.373544 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 17 00:11:46 crc kubenswrapper[5109]: I0217 00:11:46.398427 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=8.398406727 podStartE2EDuration="8.398406727s" podCreationTimestamp="2026-02-17 00:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:40.527536723 +0000 UTC m=+171.859091491" 
watchObservedRunningTime="2026-02-17 00:11:46.398406727 +0000 UTC m=+177.729961495" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.209677 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.210294 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" podUID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" containerName="controller-manager" containerID="cri-o://ec54c3f0f5eeba0a412c73a5c34a716d4df31b64b3aae5cd08e54fdbb31e6029" gracePeriod=30 Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.229265 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.229644 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" podUID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" containerName="route-controller-manager" containerID="cri-o://dd11a53a7f9f5abe1ad2902125a77d5e44b1af5666ecd595685cdc65c8323a14" gracePeriod=30 Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.565193 5109 generic.go:358] "Generic (PLEG): container finished" podID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" containerID="ec54c3f0f5eeba0a412c73a5c34a716d4df31b64b3aae5cd08e54fdbb31e6029" exitCode=0 Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.565361 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" event={"ID":"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b","Type":"ContainerDied","Data":"ec54c3f0f5eeba0a412c73a5c34a716d4df31b64b3aae5cd08e54fdbb31e6029"} Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.567578 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" containerID="dd11a53a7f9f5abe1ad2902125a77d5e44b1af5666ecd595685cdc65c8323a14" exitCode=0 Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.567679 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" event={"ID":"7867b2f2-31c0-4ffd-85b3-93fefbc2e565","Type":"ContainerDied","Data":"dd11a53a7f9f5abe1ad2902125a77d5e44b1af5666ecd595685cdc65c8323a14"} Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.747919 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.778016 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.778774 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" containerName="route-controller-manager" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.778798 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" containerName="route-controller-manager" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.778932 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" containerName="route-controller-manager" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.789767 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.789781 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896172 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp\") pod \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896215 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca\") pod \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896271 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert\") pod \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896359 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config\") pod \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896462 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfsj9\" (UniqueName: \"kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9\") pod \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\" (UID: \"7867b2f2-31c0-4ffd-85b3-93fefbc2e565\") " Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896551 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp" (OuterVolumeSpecName: "tmp") pod "7867b2f2-31c0-4ffd-85b3-93fefbc2e565" (UID: "7867b2f2-31c0-4ffd-85b3-93fefbc2e565"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896763 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896775 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca" (OuterVolumeSpecName: "client-ca") pod "7867b2f2-31c0-4ffd-85b3-93fefbc2e565" (UID: "7867b2f2-31c0-4ffd-85b3-93fefbc2e565"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896859 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.896877 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config" (OuterVolumeSpecName: "config") pod "7867b2f2-31c0-4ffd-85b3-93fefbc2e565" (UID: "7867b2f2-31c0-4ffd-85b3-93fefbc2e565"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897009 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897030 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4nc\" (UniqueName: \"kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897090 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897132 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897142 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.897151 5109 
reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.901313 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7867b2f2-31c0-4ffd-85b3-93fefbc2e565" (UID: "7867b2f2-31c0-4ffd-85b3-93fefbc2e565"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.901499 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9" (OuterVolumeSpecName: "kube-api-access-vfsj9") pod "7867b2f2-31c0-4ffd-85b3-93fefbc2e565" (UID: "7867b2f2-31c0-4ffd-85b3-93fefbc2e565"). InnerVolumeSpecName "kube-api-access-vfsj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.990375 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998238 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998329 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998358 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4nc\" (UniqueName: \"kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998418 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998460 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998509 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.998526 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vfsj9\" (UniqueName: \"kubernetes.io/projected/7867b2f2-31c0-4ffd-85b3-93fefbc2e565-kube-api-access-vfsj9\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.999003 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:49 crc kubenswrapper[5109]: I0217 00:11:49.999889 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.000107 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.004574 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.022141 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.023059 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" containerName="controller-manager" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.023090 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" containerName="controller-manager" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.023295 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" containerName="controller-manager" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.027422 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4nc\" (UniqueName: \"kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc\") pod \"route-controller-manager-7cb678599c-sll62\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.052774 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:11:50 crc kubenswrapper[5109]: 
I0217 00:11:50.053101 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.099700 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.099763 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.099819 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.099916 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.099993 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbgzg\" (UniqueName: \"kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.100017 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp\") pod \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\" (UID: \"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b\") " Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.100511 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp" (OuterVolumeSpecName: "tmp") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.101184 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.101193 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.101369 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config" (OuterVolumeSpecName: "config") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.103126 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.105886 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.109794 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg" (OuterVolumeSpecName: "kube-api-access-cbgzg") pod "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" (UID: "5799e5b7-f76e-4e21-8f3c-b7b45a7d949b"). InnerVolumeSpecName "kube-api-access-cbgzg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201534 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201658 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201758 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201807 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201882 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.201951 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202094 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202124 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cbgzg\" (UniqueName: \"kubernetes.io/projected/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-kube-api-access-cbgzg\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202144 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202162 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202178 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.202194 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.302849 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.302900 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.302993 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.303013 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 
00:11:50.303050 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.303067 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.303916 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.304969 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.305754 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: 
I0217 00:11:50.307734 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.313944 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.333217 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl\") pod \"controller-manager-86785997fb-bjdcs\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.368258 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.378733 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:50 crc kubenswrapper[5109]: W0217 00:11:50.385527 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aacd6b6_1714_4bbd_a541_35251e2f0d77.slice/crio-820196be67fdd6289901154754cceb4c1f8c4589723b85f910bf033e421d30ca WatchSource:0}: Error finding container 820196be67fdd6289901154754cceb4c1f8c4589723b85f910bf033e421d30ca: Status 404 returned error can't find the container with id 820196be67fdd6289901154754cceb4c1f8c4589723b85f910bf033e421d30ca Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.585060 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" event={"ID":"5799e5b7-f76e-4e21-8f3c-b7b45a7d949b","Type":"ContainerDied","Data":"2c27e763d205afab0309361d4c704aac4450aae4682bde77e275737504310a2f"} Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.585216 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d455c4f7d-8dmrt" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.585461 5109 scope.go:117] "RemoveContainer" containerID="ec54c3f0f5eeba0a412c73a5c34a716d4df31b64b3aae5cd08e54fdbb31e6029" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.590628 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" event={"ID":"6aacd6b6-1714-4bbd-a541-35251e2f0d77","Type":"ContainerStarted","Data":"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16"} Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.590676 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" event={"ID":"6aacd6b6-1714-4bbd-a541-35251e2f0d77","Type":"ContainerStarted","Data":"820196be67fdd6289901154754cceb4c1f8c4589723b85f910bf033e421d30ca"} Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.590869 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.596091 5109 patch_prober.go:28] interesting pod/route-controller-manager-7cb678599c-sll62 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.596157 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection 
refused" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.602615 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" event={"ID":"7867b2f2-31c0-4ffd-85b3-93fefbc2e565","Type":"ContainerDied","Data":"3e16093b22aa18f2b299239339a8883068cb647a1a0d47f0d558616033789cdb"} Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.602966 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.611712 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" podStartSLOduration=1.6109099169999999 podStartE2EDuration="1.610909917s" podCreationTimestamp="2026-02-17 00:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:50.608221796 +0000 UTC m=+181.939776544" watchObservedRunningTime="2026-02-17 00:11:50.610909917 +0000 UTC m=+181.942464675" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.618316 5109 scope.go:117] "RemoveContainer" containerID="dd11a53a7f9f5abe1ad2902125a77d5e44b1af5666ecd595685cdc65c8323a14" Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.636564 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.646274 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d455c4f7d-8dmrt"] Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.650588 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:50 crc kubenswrapper[5109]: 
I0217 00:11:50.653193 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656fc8d67f-vbvgm"] Feb 17 00:11:50 crc kubenswrapper[5109]: I0217 00:11:50.807421 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.472808 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5799e5b7-f76e-4e21-8f3c-b7b45a7d949b" path="/var/lib/kubelet/pods/5799e5b7-f76e-4e21-8f3c-b7b45a7d949b/volumes" Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.474156 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7867b2f2-31c0-4ffd-85b3-93fefbc2e565" path="/var/lib/kubelet/pods/7867b2f2-31c0-4ffd-85b3-93fefbc2e565/volumes" Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.612563 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" event={"ID":"acbf42a6-2dc4-4597-a223-aa3eebfcf31b","Type":"ContainerStarted","Data":"388f726378012bab2b3e02da15afc1d7afcae6ed04257d220a2cd290b8e9f7c0"} Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.612650 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" event={"ID":"acbf42a6-2dc4-4597-a223-aa3eebfcf31b","Type":"ContainerStarted","Data":"2f431e5eb4f43d5e2fe76875f259fb036482a1f9126277fb728f1e87bc8a75ff"} Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.614053 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.624277 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:11:51 crc 
kubenswrapper[5109]: I0217 00:11:51.643665 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" podStartSLOduration=2.643639078 podStartE2EDuration="2.643639078s" podCreationTimestamp="2026-02-17 00:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:11:51.639308144 +0000 UTC m=+182.970862962" watchObservedRunningTime="2026-02-17 00:11:51.643639078 +0000 UTC m=+182.975193866" Feb 17 00:11:51 crc kubenswrapper[5109]: I0217 00:11:51.771682 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:12:00 crc kubenswrapper[5109]: I0217 00:12:00.332303 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-c5txm"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.202731 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.204001 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" podUID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" containerName="controller-manager" containerID="cri-o://388f726378012bab2b3e02da15afc1d7afcae6ed04257d220a2cd290b8e9f7c0" gracePeriod=30 Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.221000 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.221337 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" 
podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerName="route-controller-manager" containerID="cri-o://1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16" gracePeriod=30 Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.642778 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.675861 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.676664 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerName="route-controller-manager" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.676690 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerName="route-controller-manager" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.676837 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerName="route-controller-manager" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.683959 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.692546 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.724109 5109 generic.go:358] "Generic (PLEG): container finished" podID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" containerID="1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16" exitCode=0 Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.724216 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" event={"ID":"6aacd6b6-1714-4bbd-a541-35251e2f0d77","Type":"ContainerDied","Data":"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16"} Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.724278 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" event={"ID":"6aacd6b6-1714-4bbd-a541-35251e2f0d77","Type":"ContainerDied","Data":"820196be67fdd6289901154754cceb4c1f8c4589723b85f910bf033e421d30ca"} Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.724284 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.724306 5109 scope.go:117] "RemoveContainer" containerID="1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.727262 5109 generic.go:358] "Generic (PLEG): container finished" podID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" containerID="388f726378012bab2b3e02da15afc1d7afcae6ed04257d220a2cd290b8e9f7c0" exitCode=0 Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.727420 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" event={"ID":"acbf42a6-2dc4-4597-a223-aa3eebfcf31b","Type":"ContainerDied","Data":"388f726378012bab2b3e02da15afc1d7afcae6ed04257d220a2cd290b8e9f7c0"} Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.743835 5109 scope.go:117] "RemoveContainer" containerID="1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16" Feb 17 00:12:09 crc kubenswrapper[5109]: E0217 00:12:09.744635 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16\": container with ID starting with 1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16 not found: ID does not exist" containerID="1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.744711 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16"} err="failed to get container status \"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16\": rpc error: code = NotFound desc = could not find container \"1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16\": 
container with ID starting with 1fa56780d105eedb1b0ceaebef17b4f00aea8983978e2ae0f2f502975e127e16 not found: ID does not exist" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817029 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config\") pod \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817082 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca\") pod \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817121 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4nc\" (UniqueName: \"kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc\") pod \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817184 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert\") pod \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817220 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp\") pod \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\" (UID: \"6aacd6b6-1714-4bbd-a541-35251e2f0d77\") " Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817381 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-config\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817504 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-client-ca\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817530 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hql2w\" (UniqueName: \"kubernetes.io/projected/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-kube-api-access-hql2w\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817635 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-serving-cert\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.817696 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-tmp\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: 
\"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.818735 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config" (OuterVolumeSpecName: "config") pod "6aacd6b6-1714-4bbd-a541-35251e2f0d77" (UID: "6aacd6b6-1714-4bbd-a541-35251e2f0d77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.819248 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca" (OuterVolumeSpecName: "client-ca") pod "6aacd6b6-1714-4bbd-a541-35251e2f0d77" (UID: "6aacd6b6-1714-4bbd-a541-35251e2f0d77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.820305 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp" (OuterVolumeSpecName: "tmp") pod "6aacd6b6-1714-4bbd-a541-35251e2f0d77" (UID: "6aacd6b6-1714-4bbd-a541-35251e2f0d77"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.829750 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6aacd6b6-1714-4bbd-a541-35251e2f0d77" (UID: "6aacd6b6-1714-4bbd-a541-35251e2f0d77"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.839636 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc" (OuterVolumeSpecName: "kube-api-access-kq4nc") pod "6aacd6b6-1714-4bbd-a541-35251e2f0d77" (UID: "6aacd6b6-1714-4bbd-a541-35251e2f0d77"). InnerVolumeSpecName "kube-api-access-kq4nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923577 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-tmp\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923660 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-config\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923692 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-client-ca\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923712 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hql2w\" (UniqueName: 
\"kubernetes.io/projected/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-kube-api-access-hql2w\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923768 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-serving-cert\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923828 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923839 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6aacd6b6-1714-4bbd-a541-35251e2f0d77-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923849 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq4nc\" (UniqueName: \"kubernetes.io/projected/6aacd6b6-1714-4bbd-a541-35251e2f0d77-kube-api-access-kq4nc\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923861 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6aacd6b6-1714-4bbd-a541-35251e2f0d77-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.923871 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6aacd6b6-1714-4bbd-a541-35251e2f0d77-tmp\") on node \"crc\" DevicePath 
\"\"" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.925332 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-tmp\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.925530 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-client-ca\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.925958 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-config\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.930493 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-serving-cert\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: \"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.954675 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hql2w\" (UniqueName: \"kubernetes.io/projected/c91bd5cb-db44-4594-b47e-eb0b7fdbb17c-kube-api-access-hql2w\") pod \"route-controller-manager-76d7df8694-5fm69\" (UID: 
\"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c\") " pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.958734 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.985781 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d65f859f8-4v9n5"] Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.986478 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" containerName="controller-manager" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.986500 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" containerName="controller-manager" Feb 17 00:12:09 crc kubenswrapper[5109]: I0217 00:12:09.986659 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" containerName="controller-manager" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.004241 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d65f859f8-4v9n5"] Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.004458 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.004475 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.024759 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-config\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.024826 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-client-ca\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.025152 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54229c06-4b3d-4837-b87e-4276d8a51a1d-serving-cert\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.025236 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54229c06-4b3d-4837-b87e-4276d8a51a1d-tmp\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.025271 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfn4b\" (UniqueName: 
\"kubernetes.io/projected/54229c06-4b3d-4837-b87e-4276d8a51a1d-kube-api-access-lfn4b\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.025421 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-proxy-ca-bundles\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.062269 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.081433 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb678599c-sll62"] Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126198 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126303 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126351 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126431 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126468 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126484 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert\") pod \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\" (UID: \"acbf42a6-2dc4-4597-a223-aa3eebfcf31b\") " Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126561 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-config\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126580 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-client-ca\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " 
pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126657 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54229c06-4b3d-4837-b87e-4276d8a51a1d-serving-cert\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126680 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54229c06-4b3d-4837-b87e-4276d8a51a1d-tmp\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126697 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfn4b\" (UniqueName: \"kubernetes.io/projected/54229c06-4b3d-4837-b87e-4276d8a51a1d-kube-api-access-lfn4b\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.126747 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-proxy-ca-bundles\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.127860 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-proxy-ca-bundles\") pod 
\"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.128457 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.130474 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-config\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.131181 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config" (OuterVolumeSpecName: "config") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.131550 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp" (OuterVolumeSpecName: "tmp") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.131840 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca" (OuterVolumeSpecName: "client-ca") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.145421 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/54229c06-4b3d-4837-b87e-4276d8a51a1d-tmp\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.145657 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54229c06-4b3d-4837-b87e-4276d8a51a1d-client-ca\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.149293 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54229c06-4b3d-4837-b87e-4276d8a51a1d-serving-cert\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: \"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.161919 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfn4b\" (UniqueName: \"kubernetes.io/projected/54229c06-4b3d-4837-b87e-4276d8a51a1d-kube-api-access-lfn4b\") pod \"controller-manager-d65f859f8-4v9n5\" (UID: 
\"54229c06-4b3d-4837-b87e-4276d8a51a1d\") " pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.167641 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl" (OuterVolumeSpecName: "kube-api-access-t97sl") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "kube-api-access-t97sl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.167777 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "acbf42a6-2dc4-4597-a223-aa3eebfcf31b" (UID: "acbf42a6-2dc4-4597-a223-aa3eebfcf31b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227896 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227931 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227940 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227951 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227960 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t97sl\" (UniqueName: \"kubernetes.io/projected/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-kube-api-access-t97sl\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.227968 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acbf42a6-2dc4-4597-a223-aa3eebfcf31b-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.323247 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.500468 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69"] Feb 17 00:12:10 crc kubenswrapper[5109]: W0217 00:12:10.512852 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc91bd5cb_db44_4594_b47e_eb0b7fdbb17c.slice/crio-16226e282af571573c606d595334434e9916b3a3ba1dd8fb176423e6645acfaa WatchSource:0}: Error finding container 16226e282af571573c606d595334434e9916b3a3ba1dd8fb176423e6645acfaa: Status 404 returned error can't find the container with id 16226e282af571573c606d595334434e9916b3a3ba1dd8fb176423e6645acfaa Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.736169 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" event={"ID":"acbf42a6-2dc4-4597-a223-aa3eebfcf31b","Type":"ContainerDied","Data":"2f431e5eb4f43d5e2fe76875f259fb036482a1f9126277fb728f1e87bc8a75ff"} Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 
00:12:10.736233 5109 scope.go:117] "RemoveContainer" containerID="388f726378012bab2b3e02da15afc1d7afcae6ed04257d220a2cd290b8e9f7c0" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.736228 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86785997fb-bjdcs" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.739353 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" event={"ID":"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c","Type":"ContainerStarted","Data":"5636a70f8967833f02701d7d2ebff85c357b81a9ef01853f1083f8c56c7146d0"} Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.739432 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" event={"ID":"c91bd5cb-db44-4594-b47e-eb0b7fdbb17c","Type":"ContainerStarted","Data":"16226e282af571573c606d595334434e9916b3a3ba1dd8fb176423e6645acfaa"} Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.739944 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.766119 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" podStartSLOduration=1.7660837219999999 podStartE2EDuration="1.766083722s" podCreationTimestamp="2026-02-17 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:12:10.760761313 +0000 UTC m=+202.092316071" watchObservedRunningTime="2026-02-17 00:12:10.766083722 +0000 UTC m=+202.097638510" Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.779453 5109 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.784414 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86785997fb-bjdcs"] Feb 17 00:12:10 crc kubenswrapper[5109]: I0217 00:12:10.798053 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d65f859f8-4v9n5"] Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.478693 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aacd6b6-1714-4bbd-a541-35251e2f0d77" path="/var/lib/kubelet/pods/6aacd6b6-1714-4bbd-a541-35251e2f0d77/volumes" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.481508 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbf42a6-2dc4-4597-a223-aa3eebfcf31b" path="/var/lib/kubelet/pods/acbf42a6-2dc4-4597-a223-aa3eebfcf31b/volumes" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.740458 5109 patch_prober.go:28] interesting pod/route-controller-manager-76d7df8694-5fm69 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" start-of-body= Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.740580 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" podUID="c91bd5cb-db44-4594-b47e-eb0b7fdbb17c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.747512 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" 
event={"ID":"54229c06-4b3d-4837-b87e-4276d8a51a1d","Type":"ContainerStarted","Data":"2203b2be0f91e0d6a4f244996847660c7c419ec37f1fde8d8a97bd9dc9ae958f"} Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.747631 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.747657 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" event={"ID":"54229c06-4b3d-4837-b87e-4276d8a51a1d","Type":"ContainerStarted","Data":"971772f2cdb809500ae433fcd2e708acb77bc75a56cf7c94c29e2c5d22d0223c"} Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.760281 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.802800 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d65f859f8-4v9n5" podStartSLOduration=2.802776368 podStartE2EDuration="2.802776368s" podCreationTimestamp="2026-02-17 00:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:12:11.77742765 +0000 UTC m=+203.108982418" watchObservedRunningTime="2026-02-17 00:12:11.802776368 +0000 UTC m=+203.134331156" Feb 17 00:12:11 crc kubenswrapper[5109]: I0217 00:12:11.843813 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d7df8694-5fm69" Feb 17 00:12:13 crc kubenswrapper[5109]: I0217 00:12:13.853121 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36442: no serving certificate available for the kubelet" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.660699 5109 kubelet.go:2537] "SyncLoop 
ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.666439 5109 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.666495 5109 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.666870 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.667720 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144" gracePeriod=15 Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.667743 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674" gracePeriod=15 Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.667919 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7" gracePeriod=15 Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.667965 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 
crc kubenswrapper[5109]: I0217 00:12:17.668011 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db" gracePeriod=15 Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668044 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30" gracePeriod=15 Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668020 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668118 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668136 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668188 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668202 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668222 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668237 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668260 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668272 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668308 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668321 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668357 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668369 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668390 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668402 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668670 
5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668695 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668716 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668737 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668751 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668771 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668787 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.668805 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.669027 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.669047 5109 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.669461 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.669726 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.669744 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.672642 5109 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.693381 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.717044 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.745947 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746013 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746042 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746068 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746092 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746145 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746163 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746190 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746351 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.746388 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.847815 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.847879 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.847924 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.847966 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848042 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848073 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848110 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848162 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848211 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848288 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848398 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848462 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848784 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848815 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848787 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848860 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848890 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.848901 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.849204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:17 crc kubenswrapper[5109]: I0217 00:12:17.849212 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.013000 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 00:12:18 crc kubenswrapper[5109]: E0217 00:12:18.034278 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894e04471b86c0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:12:18.033757195 +0000 UTC m=+209.365311953,LastTimestamp:2026-02-17 00:12:18.033757195 +0000 UTC m=+209.365311953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.822365 5109 generic.go:358] "Generic (PLEG): container finished" podID="30062c3d-5b20-47fc-afb7-8c62131e36ac" containerID="8a1e4248b7cabc0481354c0d510e0d363fcb5d1b78eed518703c75338290ce0d" exitCode=0 Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.822522 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"30062c3d-5b20-47fc-afb7-8c62131e36ac","Type":"ContainerDied","Data":"8a1e4248b7cabc0481354c0d510e0d363fcb5d1b78eed518703c75338290ce0d"} Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.824080 5109 status_manager.go:895] "Failed to get status for pod" 
podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.824493 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.826173 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.828148 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.829096 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674" exitCode=0 Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.829200 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7" exitCode=0 Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.829163 5109 scope.go:117] "RemoveContainer" containerID="2714abd11f9fcc5bf7a3130bf396a8407088c58ac080791afb94faa2aeb1d8ef" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.829258 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="3a14caf222afb62aaabdc47808b6f944" containerID="6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db" exitCode=0 Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.829279 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30" exitCode=2 Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.832240 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"6e102103b887cf54ee5a49259523e90efc2e670969b48f9cfd49a921f490d087"} Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.832288 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"87fa89f860d5941c129d5ff2fb3e820ca40edb763e263290d189cde05694b1c1"} Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.833222 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:18 crc kubenswrapper[5109]: I0217 00:12:18.833923 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:19 crc kubenswrapper[5109]: I0217 00:12:19.469924 5109 status_manager.go:895] "Failed to get status for pod" 
podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:19 crc kubenswrapper[5109]: I0217 00:12:19.470548 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:19 crc kubenswrapper[5109]: I0217 00:12:19.865699 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.186161 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.187356 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.188366 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.188974 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.189369 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.281056 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.281648 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.281923 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.282268 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.313435 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.313906 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314010 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314040 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314113 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314229 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314345 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314550 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314715 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314832 5109 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314874 5109 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.314893 5109 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.317900 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.415898 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access\") pod \"30062c3d-5b20-47fc-afb7-8c62131e36ac\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416016 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir\") pod \"30062c3d-5b20-47fc-afb7-8c62131e36ac\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416071 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock\") pod \"30062c3d-5b20-47fc-afb7-8c62131e36ac\" (UID: \"30062c3d-5b20-47fc-afb7-8c62131e36ac\") " Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416547 5109 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416571 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416652 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock" (OuterVolumeSpecName: "var-lock") pod "30062c3d-5b20-47fc-afb7-8c62131e36ac" (UID: "30062c3d-5b20-47fc-afb7-8c62131e36ac"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.416687 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "30062c3d-5b20-47fc-afb7-8c62131e36ac" (UID: "30062c3d-5b20-47fc-afb7-8c62131e36ac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.424957 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "30062c3d-5b20-47fc-afb7-8c62131e36ac" (UID: "30062c3d-5b20-47fc-afb7-8c62131e36ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.518109 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30062c3d-5b20-47fc-afb7-8c62131e36ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.518485 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.518701 5109 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/30062c3d-5b20-47fc-afb7-8c62131e36ac-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.879599 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 17 
00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.881519 5109 scope.go:117] "RemoveContainer" containerID="01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.881548 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144" exitCode=0 Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.881642 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.885226 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.885337 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"30062c3d-5b20-47fc-afb7-8c62131e36ac","Type":"ContainerDied","Data":"1d8d7fe706871f68545c24cd2659290b0885d973f463e4b2871875048f1b3a96"} Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.885451 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8d7fe706871f68545c24cd2659290b0885d973f463e4b2871875048f1b3a96" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.905366 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.905910 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.906144 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.909874 5109 scope.go:117] "RemoveContainer" containerID="af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.911416 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.911858 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.912295 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.925332 5109 scope.go:117] 
"RemoveContainer" containerID="6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.939878 5109 scope.go:117] "RemoveContainer" containerID="d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.954528 5109 scope.go:117] "RemoveContainer" containerID="a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144" Feb 17 00:12:20 crc kubenswrapper[5109]: I0217 00:12:20.976266 5109 scope.go:117] "RemoveContainer" containerID="ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.033078 5109 scope.go:117] "RemoveContainer" containerID="01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.033540 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674\": container with ID starting with 01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674 not found: ID does not exist" containerID="01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.033630 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674"} err="failed to get container status \"01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674\": rpc error: code = NotFound desc = could not find container \"01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674\": container with ID starting with 01e8438dd967be87a517b2247647b63d0b0a0ab4809a38166ccd08472ce01674 not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.033680 5109 scope.go:117] "RemoveContainer" 
containerID="af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.034170 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7\": container with ID starting with af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7 not found: ID does not exist" containerID="af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.034208 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7"} err="failed to get container status \"af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7\": rpc error: code = NotFound desc = could not find container \"af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7\": container with ID starting with af55cbbead47c368e6b0f722919904deb3d9d48d3adce23a62ad46c724d958d7 not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.034229 5109 scope.go:117] "RemoveContainer" containerID="6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.034887 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db\": container with ID starting with 6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db not found: ID does not exist" containerID="6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.034952 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db"} err="failed to get container status \"6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db\": rpc error: code = NotFound desc = could not find container \"6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db\": container with ID starting with 6da90479a27e9f43b61bf6fbe3714fcc9ba9c95fef7ebdd87c26d235bcfe52db not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.034998 5109 scope.go:117] "RemoveContainer" containerID="d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.035478 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30\": container with ID starting with d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30 not found: ID does not exist" containerID="d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.035507 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30"} err="failed to get container status \"d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30\": rpc error: code = NotFound desc = could not find container \"d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30\": container with ID starting with d2a3c7b462f74b168b5fe77dc689e662f7b9e5e31f3a142f5b5d291d787b7a30 not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.035546 5109 scope.go:117] "RemoveContainer" containerID="a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.035979 5109 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144\": container with ID starting with a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144 not found: ID does not exist" containerID="a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.036000 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144"} err="failed to get container status \"a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144\": rpc error: code = NotFound desc = could not find container \"a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144\": container with ID starting with a0ea5c7fda47fd144d19a40ef150d8139d28fa518cc105121ef56acaa6a89144 not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.036015 5109 scope.go:117] "RemoveContainer" containerID="ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335" Feb 17 00:12:21 crc kubenswrapper[5109]: E0217 00:12:21.036267 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335\": container with ID starting with ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335 not found: ID does not exist" containerID="ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.036323 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335"} err="failed to get container status \"ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335\": rpc error: code = NotFound desc = could not find container 
\"ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335\": container with ID starting with ca21493ca46d81a601c2764ee7fb2c01a69162decccae8714c7a03d21c681335 not found: ID does not exist" Feb 17 00:12:21 crc kubenswrapper[5109]: I0217 00:12:21.472900 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 17 00:12:22 crc kubenswrapper[5109]: E0217 00:12:22.369178 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.199:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894e04471b86c0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 00:12:18.033757195 +0000 UTC m=+209.365311953,LastTimestamp:2026-02-17 00:12:18.033757195 +0000 UTC m=+209.365311953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.154975 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 
00:12:24.155795 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.156675 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.157059 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.157464 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:24 crc kubenswrapper[5109]: I0217 00:12:24.157517 5109 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.157978 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="200ms" Feb 17 00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.358930 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="400ms" Feb 17 
00:12:24 crc kubenswrapper[5109]: E0217 00:12:24.760556 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="800ms" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.352893 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerName="oauth-openshift" containerID="cri-o://854ea946ed69699e59c5630202e48752d2e227d4c574ba5c9f267fcf2028141a" gracePeriod=15 Feb 17 00:12:25 crc kubenswrapper[5109]: E0217 00:12:25.562168 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="1.6s" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.918524 5109 generic.go:358] "Generic (PLEG): container finished" podID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerID="854ea946ed69699e59c5630202e48752d2e227d4c574ba5c9f267fcf2028141a" exitCode=0 Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.918628 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" event={"ID":"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2","Type":"ContainerDied","Data":"854ea946ed69699e59c5630202e48752d2e227d4c574ba5c9f267fcf2028141a"} Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.918682 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" event={"ID":"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2","Type":"ContainerDied","Data":"6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a"} Feb 17 00:12:25 crc kubenswrapper[5109]: 
I0217 00:12:25.918707 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c00c8d264bfd7e4f470076c165cf8a0a1ab731ffbaeb5abcbabeea6a4b6d17a" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.928213 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.928883 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.929215 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:25 crc kubenswrapper[5109]: I0217 00:12:25.929562 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007562 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 
17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007692 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007763 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zc52\" (UniqueName: \"kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007804 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007880 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007916 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007960 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.007994 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008030 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008071 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008129 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008175 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008211 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.008250 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle\") pod \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\" (UID: \"f6badb98-48f9-46ff-9aca-7e1cecfb0ef2\") " Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.009101 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.010541 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.010565 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.010916 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.011020 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.015185 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.015276 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52" (OuterVolumeSpecName: "kube-api-access-9zc52") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "kube-api-access-9zc52". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.015538 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.020169 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.020916 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.021231 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.021711 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.021999 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.022085 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" (UID: "f6badb98-48f9-46ff-9aca-7e1cecfb0ef2"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110036 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110085 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110108 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110126 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110145 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110164 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110182 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110204 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110224 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110242 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110260 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110279 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zc52\" (UniqueName: \"kubernetes.io/projected/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-kube-api-access-9zc52\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110296 5109 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.110313 5109 reconciler_common.go:299] 
"Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.923089 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.923927 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.924465 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.925086 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.938051 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial 
tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.938345 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:26 crc kubenswrapper[5109]: I0217 00:12:26.938722 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:27 crc kubenswrapper[5109]: E0217 00:12:27.163712 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.199:6443: connect: connection refused" interval="3.2s" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.471686 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.472580 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 
00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.473175 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.476686 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.478372 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.479271 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.480086 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.495006 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.495178 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:29 crc kubenswrapper[5109]: E0217 00:12:29.496020 5109 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.496342 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.942198 5109 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="f64337f1818be96e7e96778039853e94c63f3b3c9971ea5818e5fcdf7b2b0e5a" exitCode=0 Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.942253 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"f64337f1818be96e7e96778039853e94c63f3b3c9971ea5818e5fcdf7b2b0e5a"} Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.942313 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"9d97a801f2857c9606f1ed823366793910e6aae08eb1ed2cfe1b570c97270264"} Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.942575 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.942586 5109 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:29 crc kubenswrapper[5109]: E0217 00:12:29.942968 5109 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.943142 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.943339 5109 status_manager.go:895] "Failed to get status for pod" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" pod="openshift-authentication/oauth-openshift-66458b6674-c5txm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-c5txm\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:29 crc kubenswrapper[5109]: I0217 00:12:29.943534 5109 status_manager.go:895] "Failed to get status for pod" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.199:6443: connect: connection refused" Feb 17 00:12:30 crc kubenswrapper[5109]: I0217 00:12:30.951635 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6e277faf722122d0e6c7eff26cb7e83276ca69bb5a67e6cc2cd1be4312a93d6f"} Feb 17 00:12:30 crc 
kubenswrapper[5109]: I0217 00:12:30.951685 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ed1fa83222a4b36e95cf8227507908ef3bfcbfb371c40bb746112f7bb636067e"} Feb 17 00:12:30 crc kubenswrapper[5109]: I0217 00:12:30.951699 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"91368563adb6bf87b6c95321a2870903b6ef712a1728fad14890fd949d02500c"} Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.964581 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3af305a769f2f3c7af13c8806ff9e455789c0acfb54386c4088808ce08ac0e10"} Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.965281 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.964983 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.965406 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.965355 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b306b9f64f8c01c27732114c104d330365264526e1787c2f7435ad22a72491c8"} Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.967753 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.967806 5109 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6724cb7115224b3beafe0f51aabd64e0d52a6101b64fe1dd025f1b91232bc384" exitCode=1 Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.967852 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6724cb7115224b3beafe0f51aabd64e0d52a6101b64fe1dd025f1b91232bc384"} Feb 17 00:12:31 crc kubenswrapper[5109]: I0217 00:12:31.968758 5109 scope.go:117] "RemoveContainer" containerID="6724cb7115224b3beafe0f51aabd64e0d52a6101b64fe1dd025f1b91232bc384" Feb 17 00:12:32 crc kubenswrapper[5109]: I0217 00:12:32.976200 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 17 00:12:32 crc kubenswrapper[5109]: I0217 00:12:32.976645 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ab91f65782b4d8ee4b3d6480bb299163547c4b3e8f40151975686fb41a80baec"} Feb 17 00:12:34 crc kubenswrapper[5109]: I0217 00:12:34.497004 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:34 crc kubenswrapper[5109]: I0217 00:12:34.497115 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:34 crc kubenswrapper[5109]: I0217 00:12:34.503788 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:36 crc kubenswrapper[5109]: I0217 00:12:36.987151 5109 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:36 crc kubenswrapper[5109]: I0217 00:12:36.988065 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:37 crc kubenswrapper[5109]: I0217 00:12:37.007353 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:37 crc kubenswrapper[5109]: I0217 00:12:37.007387 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:37 crc kubenswrapper[5109]: I0217 00:12:37.015262 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 00:12:38 crc kubenswrapper[5109]: I0217 00:12:38.013572 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:38 crc kubenswrapper[5109]: I0217 00:12:38.014418 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5d30b138-f18b-4f7c-b73f-d35ade3012e5" Feb 17 00:12:38 crc kubenswrapper[5109]: I0217 00:12:38.063433 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:12:38 crc kubenswrapper[5109]: I0217 00:12:38.068477 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:12:39 crc kubenswrapper[5109]: I0217 00:12:39.018410 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:12:39 crc kubenswrapper[5109]: I0217 00:12:39.490409 5109 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="8cbeed48-738e-49f5-aec9-4340b61ba9e0" Feb 17 00:12:43 crc kubenswrapper[5109]: I0217 00:12:43.874223 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 17 00:12:44 crc kubenswrapper[5109]: I0217 00:12:44.142836 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 17 00:12:45 crc kubenswrapper[5109]: I0217 00:12:45.642606 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 17 00:12:47 crc kubenswrapper[5109]: I0217 00:12:47.324347 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 17 00:12:47 crc kubenswrapper[5109]: I0217 00:12:47.534990 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 17 00:12:47 crc kubenswrapper[5109]: I0217 00:12:47.703525 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:12:48 crc kubenswrapper[5109]: I0217 00:12:48.085101 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 17 00:12:48 crc kubenswrapper[5109]: I0217 00:12:48.545112 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 17 00:12:49 crc 
kubenswrapper[5109]: I0217 00:12:49.069841 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 17 00:12:49 crc kubenswrapper[5109]: I0217 00:12:49.106681 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 17 00:12:49 crc kubenswrapper[5109]: I0217 00:12:49.868057 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 17 00:12:49 crc kubenswrapper[5109]: I0217 00:12:49.883025 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 17 00:12:49 crc kubenswrapper[5109]: I0217 00:12:49.989549 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.028650 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.198192 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.312195 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.316428 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.324693 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.395837 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.511888 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.521428 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.649175 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 17 00:12:50 crc kubenswrapper[5109]: I0217 00:12:50.733794 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.212567 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.423378 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.532181 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.563440 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.795578 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.814653 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.814913 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.906481 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 17 00:12:51 crc kubenswrapper[5109]: I0217 00:12:51.907961 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.033841 5109 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.055565 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.237266 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.319630 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.350265 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.404806 5109 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.592199 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.624862 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.664778 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.684833 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.800432 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.830211 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.849266 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.897861 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.916882 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 17 00:12:52 crc kubenswrapper[5109]: I0217 00:12:52.963021 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.026518 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.083250 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.225937 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.356402 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.385563 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.437903 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.449250 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.462221 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.524018 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.573017 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.611748 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.624946 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.655382 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.781942 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.792566 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Feb 17 00:12:53 crc kubenswrapper[5109]: I0217 00:12:53.804713 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.012944 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.122770 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.139444 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.140742 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.245832 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.260829 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.281001 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.335567 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.347679 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.366928 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.387434 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.407107 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.413864 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.598335 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.599653 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.654298 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.658999 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.669924 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.676123 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.756834 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.839649 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.848186 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.854994 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.869877 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.963333 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 17 00:12:54 crc kubenswrapper[5109]: I0217 00:12:54.967354 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.068336 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.173614 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.253426 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.267630 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.682979 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.684479 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.704769 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.712400 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.736875 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.788149 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.801454 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.844886 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.876309 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.876489 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.958796 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 17 00:12:55 crc kubenswrapper[5109]: I0217 00:12:55.989719 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.003581 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.100151 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.156851 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.209364 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.218097 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.222518 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.391154 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.514579 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.620784 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.692451 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.702346 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.823943 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.849489 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 17 00:12:56 crc kubenswrapper[5109]: I0217 00:12:56.932362 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.091033 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.123743 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.255047 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.369471 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.395404 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.395467 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.484765 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.523029 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.583972 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.594722 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.694740 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.759559 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.815280 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 17 00:12:57 crc kubenswrapper[5109]: I0217 00:12:57.867539 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.066485 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.110022 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.125781 5109 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.229192 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.316578 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.342688 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.346077 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.377354 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.451099 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.550778 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.671528 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.689625 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.709662 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.783952 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.794625 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.831690 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.863722 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.885895 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.927378 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.927473 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.959221 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 17 00:12:58 crc kubenswrapper[5109]: I0217 00:12:58.988236 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.065547 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.118675 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.127671 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.223351 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.255334 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.288221 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.301824 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.301965 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.312067 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.319808 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.363188 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.382053 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.396120 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.566036 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.606656 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.647316 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.682460 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.691935 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.715710 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.731432 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.940123 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 17 00:12:59 crc kubenswrapper[5109]: I0217 00:12:59.994908 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:12:59.999956 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.017412 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.070877 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.215913 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.220465 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.236181 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.267519 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.354917 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.531921 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.535673 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.572146 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.587150 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.642304 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.799741 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.799854 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.917917 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.941100 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 17 00:13:00 crc kubenswrapper[5109]: I0217 00:13:00.972941 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.011259 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.031330 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.102383 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.144305 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.358350 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.491450 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.547542 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.573662 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.574783 5109 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.708179 5109 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.713102 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.713492 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.713458582 podStartE2EDuration="44.713458582s" podCreationTimestamp="2026-02-17 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:12:37.058147559 +0000 UTC m=+228.389702317" watchObservedRunningTime="2026-02-17 00:13:01.713458582 +0000 UTC m=+253.045013380"
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.720055 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.726092 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-c5txm"]
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.726181 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.732362 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.765483 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.765450595 podStartE2EDuration="25.765450595s" podCreationTimestamp="2026-02-17 00:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:13:01.751803954 +0000 UTC m=+253.083358732" watchObservedRunningTime="2026-02-17 00:13:01.765450595 +0000 UTC m=+253.097005383"
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.860044 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.868354 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.890812 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.933757 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.987642 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 17 00:13:01 crc kubenswrapper[5109]: I0217 00:13:01.994663 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.073380 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.163421 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.278354 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.297290 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.319886 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.444190 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.483620 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.485495 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.671166 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 17 00:13:02 crc kubenswrapper[5109]: I0217 00:13:02.734457 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.192110 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.205659 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.366478 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.435204 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.476370 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" path="/var/lib/kubelet/pods/f6badb98-48f9-46ff-9aca-7e1cecfb0ef2/volumes"
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.509862 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.660440 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 17 00:13:03 crc kubenswrapper[5109]: I0217 00:13:03.834876 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 17 00:13:04 crc
kubenswrapper[5109]: I0217 00:13:04.067786 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 17 00:13:04 crc kubenswrapper[5109]: I0217 00:13:04.132171 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 17 00:13:04 crc kubenswrapper[5109]: I0217 00:13:04.290638 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 17 00:13:04 crc kubenswrapper[5109]: I0217 00:13:04.653467 5109 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 17 00:13:04 crc kubenswrapper[5109]: I0217 00:13:04.842477 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 17 00:13:05 crc kubenswrapper[5109]: I0217 00:13:05.052026 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 17 00:13:05 crc kubenswrapper[5109]: I0217 00:13:05.156990 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 17 00:13:05 crc kubenswrapper[5109]: I0217 00:13:05.337126 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 17 00:13:05 crc kubenswrapper[5109]: I0217 00:13:05.384323 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 17 00:13:05 crc kubenswrapper[5109]: I0217 00:13:05.425239 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 17 00:13:06 crc 
kubenswrapper[5109]: I0217 00:13:06.099008 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.464854 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"] Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.465830 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerName="oauth-openshift" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.465862 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerName="oauth-openshift" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.465892 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" containerName="installer" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.465900 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" containerName="installer" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.466059 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6badb98-48f9-46ff-9aca-7e1cecfb0ef2" containerName="oauth-openshift" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.466086 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="30062c3d-5b20-47fc-afb7-8c62131e36ac" containerName="installer" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.470913 5109 ???:1] "http: TLS handshake error from 192.168.126.11:40858: no serving certificate available for the kubelet" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.482089 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.483185 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"] Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.489366 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490294 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490431 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490493 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490564 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490646 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.490729 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.491137 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.491258 5109 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.491285 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.491414 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.492024 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.492091 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.499314 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.506818 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.512558 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553673 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: 
\"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553714 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-login\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553737 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553770 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553855 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-policies\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553905 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-dir\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553957 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.553983 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554027 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554057 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554078 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl4wn\" (UniqueName: \"kubernetes.io/projected/f9a314b2-2737-49ae-9efb-a02961dbdd1b-kube-api-access-bl4wn\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554101 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554154 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.554305 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: 
\"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.608644 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655614 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655677 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655703 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bl4wn\" (UniqueName: \"kubernetes.io/projected/f9a314b2-2737-49ae-9efb-a02961dbdd1b-kube-api-access-bl4wn\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655732 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: 
\"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.655810 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656283 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656363 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-login\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656398 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656470 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656550 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-policies\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656579 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-dir\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656633 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " 
pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.656654 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.657365 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.657399 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-service-ca\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.657650 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-dir\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.658071 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-audit-policies\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.659092 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.662966 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.663264 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-router-certs\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.663264 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-session\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 
crc kubenswrapper[5109]: I0217 00:13:06.664023 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.664113 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.664468 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-login\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.665132 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.665746 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/f9a314b2-2737-49ae-9efb-a02961dbdd1b-v4-0-config-user-template-error\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"
Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.681797 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl4wn\" (UniqueName: \"kubernetes.io/projected/f9a314b2-2737-49ae-9efb-a02961dbdd1b-kube-api-access-bl4wn\") pod \"oauth-openshift-57ffdf54dd-sddhn\" (UID: \"f9a314b2-2737-49ae-9efb-a02961dbdd1b\") " pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"
Feb 17 00:13:06 crc kubenswrapper[5109]: I0217 00:13:06.834937 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"
Feb 17 00:13:07 crc kubenswrapper[5109]: I0217 00:13:07.282730 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"]
Feb 17 00:13:08 crc kubenswrapper[5109]: I0217 00:13:08.234605 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" event={"ID":"f9a314b2-2737-49ae-9efb-a02961dbdd1b","Type":"ContainerStarted","Data":"8a33e624fdfa6d000188f8b94ec1e2da1d46015cd1d1643cf34bcc07cb7ae2ad"}
Feb 17 00:13:08 crc kubenswrapper[5109]: I0217 00:13:08.234931 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"
Feb 17 00:13:08 crc kubenswrapper[5109]: I0217 00:13:08.234942 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" event={"ID":"f9a314b2-2737-49ae-9efb-a02961dbdd1b","Type":"ContainerStarted","Data":"392603be6f3c93b3e56f986ffadfa11152885b2939941a35f203858485511ee0"}
Feb 17 00:13:08 crc kubenswrapper[5109]: I0217 00:13:08.242538 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn"
Feb 17 00:13:08 crc kubenswrapper[5109]: I0217 00:13:08.258724 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57ffdf54dd-sddhn" podStartSLOduration=68.258704081 podStartE2EDuration="1m8.258704081s" podCreationTimestamp="2026-02-17 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:13:08.255753469 +0000 UTC m=+259.587308237" watchObservedRunningTime="2026-02-17 00:13:08.258704081 +0000 UTC m=+259.590258849"
Feb 17 00:13:09 crc kubenswrapper[5109]: I0217 00:13:09.739017 5109 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 17 00:13:09 crc kubenswrapper[5109]: I0217 00:13:09.739640 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://6e102103b887cf54ee5a49259523e90efc2e670969b48f9cfd49a921f490d087" gracePeriod=5
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.280277 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.280766 5109 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="6e102103b887cf54ee5a49259523e90efc2e670969b48f9cfd49a921f490d087" exitCode=137
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.344504 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.344623 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.473839 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477311 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477382 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477466 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477538 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477562 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477557 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477661 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477689 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.477779 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.478006 5109 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.478024 5109 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.478035 5109 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.478047 5109 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.489181 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.489211 5109 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2706810d-0b28-4bf2-9478-0a33567866da"
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.489612 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.490413 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.490456 5109 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2706810d-0b28-4bf2-9478-0a33567866da"
Feb 17 00:13:15 crc kubenswrapper[5109]: I0217 00:13:15.579187 5109 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:16 crc kubenswrapper[5109]: I0217 00:13:16.287060 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 17 00:13:16 crc kubenswrapper[5109]: I0217 00:13:16.287233 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 17 00:13:16 crc kubenswrapper[5109]: I0217 00:13:16.287293 5109 scope.go:117] "RemoveContainer" containerID="6e102103b887cf54ee5a49259523e90efc2e670969b48f9cfd49a921f490d087"
Feb 17 00:13:16 crc kubenswrapper[5109]: I0217 00:13:16.289464 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 17 00:13:16 crc kubenswrapper[5109]: I0217 00:13:16.308194 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 17 00:13:17 crc kubenswrapper[5109]: I0217 00:13:17.473003 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Feb 17 00:13:24 crc kubenswrapper[5109]: I0217 00:13:24.347921 5109 generic.go:358] "Generic (PLEG): container finished" podID="a7fdfe97-5098-4468-92b7-881bc4270004" containerID="6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2" exitCode=0
Feb 17 00:13:24 crc kubenswrapper[5109]: I0217 00:13:24.348031 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerDied","Data":"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2"}
Feb 17 00:13:24 crc kubenswrapper[5109]: I0217 00:13:24.350141 5109 scope.go:117] "RemoveContainer" containerID="6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2"
Feb 17 00:13:25 crc kubenswrapper[5109]: I0217 00:13:25.357305 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerStarted","Data":"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8"}
Feb 17 00:13:25 crc kubenswrapper[5109]: I0217 00:13:25.358087 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:13:25 crc kubenswrapper[5109]: I0217 00:13:25.361324 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:13:30 crc kubenswrapper[5109]: I0217 00:13:30.800615 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:13:30 crc kubenswrapper[5109]: I0217 00:13:30.801237 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:13:35 crc kubenswrapper[5109]: I0217 00:13:35.810847 5109 ???:1] "http: TLS handshake error from 192.168.126.11:42176: no serving certificate available for the kubelet"
Feb 17 00:13:49 crc kubenswrapper[5109]: I0217 00:13:49.692098 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:13:49 crc kubenswrapper[5109]: I0217 00:13:49.692760 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:13:50 crc kubenswrapper[5109]: I0217 00:13:50.523455 5109 generic.go:358] "Generic (PLEG): container finished" podID="7a08715e-e52f-4251-9b13-72f93eacb031" containerID="798b9f619c469cdd0e5a1e7c549ab008f123cf3a545ebe74fa7bdf25032188f5" exitCode=0
Feb 17 00:13:50 crc kubenswrapper[5109]: I0217 00:13:50.523548 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-n967f" event={"ID":"7a08715e-e52f-4251-9b13-72f93eacb031","Type":"ContainerDied","Data":"798b9f619c469cdd0e5a1e7c549ab008f123cf3a545ebe74fa7bdf25032188f5"}
Feb 17 00:13:51 crc kubenswrapper[5109]: I0217 00:13:51.804768 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:13:51 crc kubenswrapper[5109]: I0217 00:13:51.990012 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2wc\" (UniqueName: \"kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc\") pod \"7a08715e-e52f-4251-9b13-72f93eacb031\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") "
Feb 17 00:13:51 crc kubenswrapper[5109]: I0217 00:13:51.990210 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca\") pod \"7a08715e-e52f-4251-9b13-72f93eacb031\" (UID: \"7a08715e-e52f-4251-9b13-72f93eacb031\") "
Feb 17 00:13:51 crc kubenswrapper[5109]: I0217 00:13:51.991927 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca" (OuterVolumeSpecName: "serviceca") pod "7a08715e-e52f-4251-9b13-72f93eacb031" (UID: "7a08715e-e52f-4251-9b13-72f93eacb031"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 17 00:13:51 crc kubenswrapper[5109]: I0217 00:13:51.992735 5109 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7a08715e-e52f-4251-9b13-72f93eacb031-serviceca\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:52 crc kubenswrapper[5109]: I0217 00:13:52.004333 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc" (OuterVolumeSpecName: "kube-api-access-tg2wc") pod "7a08715e-e52f-4251-9b13-72f93eacb031" (UID: "7a08715e-e52f-4251-9b13-72f93eacb031"). InnerVolumeSpecName "kube-api-access-tg2wc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:13:52 crc kubenswrapper[5109]: I0217 00:13:52.094277 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tg2wc\" (UniqueName: \"kubernetes.io/projected/7a08715e-e52f-4251-9b13-72f93eacb031-kube-api-access-tg2wc\") on node \"crc\" DevicePath \"\""
Feb 17 00:13:52 crc kubenswrapper[5109]: I0217 00:13:52.539468 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29521440-n967f" event={"ID":"7a08715e-e52f-4251-9b13-72f93eacb031","Type":"ContainerDied","Data":"63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda"}
Feb 17 00:13:52 crc kubenswrapper[5109]: I0217 00:13:52.539532 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63cef07e628c2987560bd0d73c3d34f12066effddf1ca9d0fc458f21b8b28fda"
Feb 17 00:13:52 crc kubenswrapper[5109]: I0217 00:13:52.539759 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29521440-n967f"
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.799874 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.800538 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.800667 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.801725 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.801836 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" containerID="cri-o://7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859" gracePeriod=600
Feb 17 00:14:00 crc kubenswrapper[5109]: I0217 00:14:00.971037 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 00:14:01 crc kubenswrapper[5109]: I0217 00:14:01.604544 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859" exitCode=0
Feb 17 00:14:01 crc kubenswrapper[5109]: I0217 00:14:01.604676 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859"}
Feb 17 00:14:01 crc kubenswrapper[5109]: I0217 00:14:01.604999 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36"}
Feb 17 00:14:14 crc kubenswrapper[5109]: I0217 00:14:14.625017 5109 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.060090 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.063305 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjl8l" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="registry-server" containerID="cri-o://a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" gracePeriod=30
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.068401 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78v77"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.069144 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78v77" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" containerID="cri-o://01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" gracePeriod=30
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.079911 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.080238 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" containerID="cri-o://8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8" gracePeriod=30
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.097674 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.097962 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5p9cp" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="registry-server" containerID="cri-o://2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1" gracePeriod=30
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.108938 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z95j7"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.109949 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a08715e-e52f-4251-9b13-72f93eacb031" containerName="image-pruner"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.109978 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a08715e-e52f-4251-9b13-72f93eacb031" containerName="image-pruner"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.110007 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.110019 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.110170 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.110192 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a08715e-e52f-4251-9b13-72f93eacb031" containerName="image-pruner"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.121037 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-trn6m"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.121497 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-trn6m" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="registry-server" containerID="cri-o://8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4" gracePeriod=30
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.121884 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.124831 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z95j7"]
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.144026 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-78v77" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" probeResult="failure" output=""
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.251395 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.251470 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5c18790-37ee-408b-98c6-feaedc47c512-tmp\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.251513 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.251561 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpsmc\" (UniqueName: \"kubernetes.io/projected/a5c18790-37ee-408b-98c6-feaedc47c512-kube-api-access-zpsmc\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.264115 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd is running failed: container process not found" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.264539 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd is running failed: container process not found" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.264574 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 is running failed: container process not found" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.266881 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd is running failed: container process not found" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.266918 5109 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vjl8l" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="registry-server" probeResult="unknown"
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.266963 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 is running failed: container process not found" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.271000 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 is running failed: container process not found" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:14:55 crc kubenswrapper[5109]: E0217 00:14:55.271064 5109 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-78v77" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" probeResult="unknown"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.354225 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.354313 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5c18790-37ee-408b-98c6-feaedc47c512-tmp\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.354346 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.354381 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpsmc\" (UniqueName: \"kubernetes.io/projected/a5c18790-37ee-408b-98c6-feaedc47c512-kube-api-access-zpsmc\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.355572 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a5c18790-37ee-408b-98c6-feaedc47c512-tmp\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.358049 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.363205 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a5c18790-37ee-408b-98c6-feaedc47c512-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.369696 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpsmc\" (UniqueName: \"kubernetes.io/projected/a5c18790-37ee-408b-98c6-feaedc47c512-kube-api-access-zpsmc\") pod \"marketplace-operator-547dbd544d-z95j7\" (UID: \"a5c18790-37ee-408b-98c6-feaedc47c512\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.506565 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.516140 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjl8l"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.516557 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.524652 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78v77"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.532485 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trn6m"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.567578 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9cp"
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660511 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities\") pod \"74a26206-1199-4cf4-912a-fa5e03a96713\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660557 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities\") pod \"fb9e5613-b1ca-483f-8efe-7c150933934b\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660604 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics\") pod \"a7fdfe97-5098-4468-92b7-881bc4270004\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660640 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t4hv\" (UniqueName: \"kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv\") pod \"387ec0da-dcb1-4001-8439-5793c9384015\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660665 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content\") pod \"74a26206-1199-4cf4-912a-fa5e03a96713\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660713 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content\") pod \"387ec0da-dcb1-4001-8439-5793c9384015\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660746 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6cgl\" (UniqueName: \"kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl\") pod \"a7fdfe97-5098-4468-92b7-881bc4270004\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660805 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca\") pod \"a7fdfe97-5098-4468-92b7-881bc4270004\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") "
Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660829 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r9x4\" (UniqueName: \"kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4\") pod \"74a26206-1199-4cf4-912a-fa5e03a96713\" (UID: \"74a26206-1199-4cf4-912a-fa5e03a96713\")
" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660876 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content\") pod \"7386e430-c5f0-467b-9375-4eab8c181f1b\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660945 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities\") pod \"7386e430-c5f0-467b-9375-4eab8c181f1b\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.660972 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities\") pod \"387ec0da-dcb1-4001-8439-5793c9384015\" (UID: \"387ec0da-dcb1-4001-8439-5793c9384015\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661031 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dnzv\" (UniqueName: \"kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv\") pod \"7386e430-c5f0-467b-9375-4eab8c181f1b\" (UID: \"7386e430-c5f0-467b-9375-4eab8c181f1b\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661110 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2gxz\" (UniqueName: \"kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz\") pod \"fb9e5613-b1ca-483f-8efe-7c150933934b\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661138 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content\") pod \"fb9e5613-b1ca-483f-8efe-7c150933934b\" (UID: \"fb9e5613-b1ca-483f-8efe-7c150933934b\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661160 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp\") pod \"a7fdfe97-5098-4468-92b7-881bc4270004\" (UID: \"a7fdfe97-5098-4468-92b7-881bc4270004\") " Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661677 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities" (OuterVolumeSpecName: "utilities") pod "fb9e5613-b1ca-483f-8efe-7c150933934b" (UID: "fb9e5613-b1ca-483f-8efe-7c150933934b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.661722 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities" (OuterVolumeSpecName: "utilities") pod "74a26206-1199-4cf4-912a-fa5e03a96713" (UID: "74a26206-1199-4cf4-912a-fa5e03a96713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.662049 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp" (OuterVolumeSpecName: "tmp") pod "a7fdfe97-5098-4468-92b7-881bc4270004" (UID: "a7fdfe97-5098-4468-92b7-881bc4270004"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.662916 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities" (OuterVolumeSpecName: "utilities") pod "7386e430-c5f0-467b-9375-4eab8c181f1b" (UID: "7386e430-c5f0-467b-9375-4eab8c181f1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.665865 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv" (OuterVolumeSpecName: "kube-api-access-4dnzv") pod "7386e430-c5f0-467b-9375-4eab8c181f1b" (UID: "7386e430-c5f0-467b-9375-4eab8c181f1b"). InnerVolumeSpecName "kube-api-access-4dnzv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.666022 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl" (OuterVolumeSpecName: "kube-api-access-v6cgl") pod "a7fdfe97-5098-4468-92b7-881bc4270004" (UID: "a7fdfe97-5098-4468-92b7-881bc4270004"). InnerVolumeSpecName "kube-api-access-v6cgl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.666995 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities" (OuterVolumeSpecName: "utilities") pod "387ec0da-dcb1-4001-8439-5793c9384015" (UID: "387ec0da-dcb1-4001-8439-5793c9384015"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.667283 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv" (OuterVolumeSpecName: "kube-api-access-6t4hv") pod "387ec0da-dcb1-4001-8439-5793c9384015" (UID: "387ec0da-dcb1-4001-8439-5793c9384015"). InnerVolumeSpecName "kube-api-access-6t4hv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.668121 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a7fdfe97-5098-4468-92b7-881bc4270004" (UID: "a7fdfe97-5098-4468-92b7-881bc4270004"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.669828 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz" (OuterVolumeSpecName: "kube-api-access-b2gxz") pod "fb9e5613-b1ca-483f-8efe-7c150933934b" (UID: "fb9e5613-b1ca-483f-8efe-7c150933934b"). InnerVolumeSpecName "kube-api-access-b2gxz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.670081 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a7fdfe97-5098-4468-92b7-881bc4270004" (UID: "a7fdfe97-5098-4468-92b7-881bc4270004"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.677980 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4" (OuterVolumeSpecName: "kube-api-access-2r9x4") pod "74a26206-1199-4cf4-912a-fa5e03a96713" (UID: "74a26206-1199-4cf4-912a-fa5e03a96713"). InnerVolumeSpecName "kube-api-access-2r9x4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.710872 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a26206-1199-4cf4-912a-fa5e03a96713" (UID: "74a26206-1199-4cf4-912a-fa5e03a96713"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.711243 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb9e5613-b1ca-483f-8efe-7c150933934b" (UID: "fb9e5613-b1ca-483f-8efe-7c150933934b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.729824 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7386e430-c5f0-467b-9375-4eab8c181f1b" (UID: "7386e430-c5f0-467b-9375-4eab8c181f1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.753273 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-z95j7"] Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762655 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762687 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2r9x4\" (UniqueName: \"kubernetes.io/projected/74a26206-1199-4cf4-912a-fa5e03a96713-kube-api-access-2r9x4\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762697 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762708 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7386e430-c5f0-467b-9375-4eab8c181f1b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762718 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762727 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4dnzv\" (UniqueName: \"kubernetes.io/projected/7386e430-c5f0-467b-9375-4eab8c181f1b-kube-api-access-4dnzv\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762735 5109 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-b2gxz\" (UniqueName: \"kubernetes.io/projected/fb9e5613-b1ca-483f-8efe-7c150933934b-kube-api-access-b2gxz\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762743 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762751 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a7fdfe97-5098-4468-92b7-881bc4270004-tmp\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762760 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762767 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9e5613-b1ca-483f-8efe-7c150933934b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762775 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a7fdfe97-5098-4468-92b7-881bc4270004-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762784 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6t4hv\" (UniqueName: \"kubernetes.io/projected/387ec0da-dcb1-4001-8439-5793c9384015-kube-api-access-6t4hv\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762792 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/74a26206-1199-4cf4-912a-fa5e03a96713-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.762800 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v6cgl\" (UniqueName: \"kubernetes.io/projected/a7fdfe97-5098-4468-92b7-881bc4270004-kube-api-access-v6cgl\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.782944 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "387ec0da-dcb1-4001-8439-5793c9384015" (UID: "387ec0da-dcb1-4001-8439-5793c9384015"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.863586 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387ec0da-dcb1-4001-8439-5793c9384015-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.952997 5109 generic.go:358] "Generic (PLEG): container finished" podID="387ec0da-dcb1-4001-8439-5793c9384015" containerID="8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4" exitCode=0 Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.953051 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerDied","Data":"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.953421 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-trn6m" 
event={"ID":"387ec0da-dcb1-4001-8439-5793c9384015","Type":"ContainerDied","Data":"22e3b626d9e9a74ab4e62f5d417bea887ce782ab4d43508c1071360494fd1212"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.953064 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-trn6m" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.953446 5109 scope.go:117] "RemoveContainer" containerID="8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.955693 5109 generic.go:358] "Generic (PLEG): container finished" podID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerID="2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1" exitCode=0 Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.955826 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerDied","Data":"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.955871 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9cp" event={"ID":"fb9e5613-b1ca-483f-8efe-7c150933934b","Type":"ContainerDied","Data":"46353b6f310473e4d7470a9ece43f2cf920d3fc1d91bb88d3020c9aed533ce0c"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.955993 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9cp" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.960026 5109 generic.go:358] "Generic (PLEG): container finished" podID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" exitCode=0 Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.960103 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-78v77" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.960109 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerDied","Data":"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.960134 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78v77" event={"ID":"7386e430-c5f0-467b-9375-4eab8c181f1b","Type":"ContainerDied","Data":"4acca9059eb8174ce0c9145617dccc092ebcc6dc5c15c86185c92bc734f36d2c"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.961452 5109 generic.go:358] "Generic (PLEG): container finished" podID="a7fdfe97-5098-4468-92b7-881bc4270004" containerID="8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8" exitCode=0 Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.961557 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.961567 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerDied","Data":"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.961650 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" event={"ID":"a7fdfe97-5098-4468-92b7-881bc4270004","Type":"ContainerDied","Data":"f01be9ad6071723c2499fd4ad7c3dc942e340d35295bec66d4645d98b7cabf68"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.969494 5109 generic.go:358] "Generic (PLEG): container finished" podID="74a26206-1199-4cf4-912a-fa5e03a96713" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" exitCode=0 Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.969662 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerDied","Data":"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.969689 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl8l" event={"ID":"74a26206-1199-4cf4-912a-fa5e03a96713","Type":"ContainerDied","Data":"bb45d32b2e8e19508d1dd79e4e46edbb6cba567a24a26228bc4ec2223fb31ae8"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.969761 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjl8l" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.974118 5109 scope.go:117] "RemoveContainer" containerID="9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8" Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.976719 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7" event={"ID":"a5c18790-37ee-408b-98c6-feaedc47c512","Type":"ContainerStarted","Data":"aecb80966b85cd2b31a7b7c712866d8e36a883d2fe37dfcc00a2e668cee85821"} Feb 17 00:14:55 crc kubenswrapper[5109]: I0217 00:14:55.976758 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7" event={"ID":"a5c18790-37ee-408b-98c6-feaedc47c512","Type":"ContainerStarted","Data":"8347647dce86aa02f06217e8e8f500bf35fdd68f4667fe04d9085e7b0f85c030"} Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.009141 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7" podStartSLOduration=1.009124575 podStartE2EDuration="1.009124575s" podCreationTimestamp="2026-02-17 00:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:14:56.0085335 +0000 UTC m=+367.340088268" watchObservedRunningTime="2026-02-17 00:14:56.009124575 +0000 UTC m=+367.340679343" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.027780 5109 scope.go:117] "RemoveContainer" containerID="3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.038695 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-trn6m"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.050375 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-trn6m"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.057649 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.064137 5109 scope.go:117] "RemoveContainer" containerID="8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.065543 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4\": container with ID starting with 8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4 not found: ID does not exist" containerID="8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.065603 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4"} err="failed to get container status \"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4\": rpc error: code = NotFound desc = could not find container \"8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4\": container with ID starting with 8e5b77d8030e3f96b8c9a75032ab8dec165d2fc5401ffbf73340136ec1e070f4 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.065631 5109 scope.go:117] "RemoveContainer" containerID="9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.066031 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8\": container with ID starting with 
9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8 not found: ID does not exist" containerID="9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.066054 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8"} err="failed to get container status \"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8\": rpc error: code = NotFound desc = could not find container \"9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8\": container with ID starting with 9ee6419b3b1a54748b4b4aec10e02a078b9c420db9572b04e08eeac98f9365a8 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.066070 5109 scope.go:117] "RemoveContainer" containerID="3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.067640 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc\": container with ID starting with 3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc not found: ID does not exist" containerID="3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.067675 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc"} err="failed to get container status \"3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc\": rpc error: code = NotFound desc = could not find container \"3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc\": container with ID starting with 3c581796a346f34f21ebfd7dcd5a563cbd175576364e1c42f627150776a079dc not found: ID does not 
exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.067695 5109 scope.go:117] "RemoveContainer" containerID="2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.075674 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-8f27m"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.083167 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78v77"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.087879 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78v77"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.088364 5109 scope.go:117] "RemoveContainer" containerID="1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.092787 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.096077 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjl8l"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.100638 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.104338 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9cp"] Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.108694 5109 scope.go:117] "RemoveContainer" containerID="84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.121264 5109 scope.go:117] "RemoveContainer" containerID="2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 
00:14:56.122203 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1\": container with ID starting with 2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1 not found: ID does not exist" containerID="2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122231 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1"} err="failed to get container status \"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1\": rpc error: code = NotFound desc = could not find container \"2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1\": container with ID starting with 2d37accd3c89b9d8afa5dbd5af706a6868e7207d47541628fd1d4180d1155ad1 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122251 5109 scope.go:117] "RemoveContainer" containerID="1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.122483 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61\": container with ID starting with 1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61 not found: ID does not exist" containerID="1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122502 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61"} err="failed to get container status \"1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61\": rpc 
error: code = NotFound desc = could not find container \"1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61\": container with ID starting with 1289bb082ddb806a571ae4df1a32420472bc43e605465c13072a140e07842e61 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122515 5109 scope.go:117] "RemoveContainer" containerID="84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.122727 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027\": container with ID starting with 84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027 not found: ID does not exist" containerID="84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122747 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027"} err="failed to get container status \"84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027\": rpc error: code = NotFound desc = could not find container \"84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027\": container with ID starting with 84ef8c1df9aa4296a5fc3aa0d01d363836fa181d36ae0b735b7de2e644789027 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.122760 5109 scope.go:117] "RemoveContainer" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.140737 5109 scope.go:117] "RemoveContainer" containerID="1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.158420 5109 scope.go:117] "RemoveContainer" 
containerID="c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.177352 5109 scope.go:117] "RemoveContainer" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.177749 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504\": container with ID starting with 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 not found: ID does not exist" containerID="01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.177792 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504"} err="failed to get container status \"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504\": rpc error: code = NotFound desc = could not find container \"01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504\": container with ID starting with 01f7e9ac5bed012896e9dd099d0f77b71d55a43eca63bd6c998e45728b196504 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.177818 5109 scope.go:117] "RemoveContainer" containerID="1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.178257 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042\": container with ID starting with 1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042 not found: ID does not exist" containerID="1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042" Feb 17 00:14:56 crc 
kubenswrapper[5109]: I0217 00:14:56.178287 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042"} err="failed to get container status \"1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042\": rpc error: code = NotFound desc = could not find container \"1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042\": container with ID starting with 1144a6bca5b9ab59e4de438bf4997684bb6e7cbebfeff793b4fbc2e581a0f042 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.178308 5109 scope.go:117] "RemoveContainer" containerID="c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.178720 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195\": container with ID starting with c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195 not found: ID does not exist" containerID="c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.178752 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195"} err="failed to get container status \"c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195\": rpc error: code = NotFound desc = could not find container \"c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195\": container with ID starting with c23057643e0ccf72f8e2f172c33b318f9fe9c3475d148d799353d8d3da966195 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.178766 5109 scope.go:117] "RemoveContainer" containerID="8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8" Feb 17 
00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.231927 5109 scope.go:117] "RemoveContainer" containerID="6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.244481 5109 scope.go:117] "RemoveContainer" containerID="8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.245036 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8\": container with ID starting with 8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8 not found: ID does not exist" containerID="8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.245070 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8"} err="failed to get container status \"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8\": rpc error: code = NotFound desc = could not find container \"8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8\": container with ID starting with 8c2cf46d5b37c8f323151d39d4cbb929072c2c6b9074c801d279215badae10c8 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.245093 5109 scope.go:117] "RemoveContainer" containerID="6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.245424 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2\": container with ID starting with 6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2 not found: ID does not exist" 
containerID="6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.245477 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2"} err="failed to get container status \"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2\": rpc error: code = NotFound desc = could not find container \"6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2\": container with ID starting with 6ddcb511d3901b79452a2641dee52272af1679b1b48c4bc03a81f02273cf15c2 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.245508 5109 scope.go:117] "RemoveContainer" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.260823 5109 scope.go:117] "RemoveContainer" containerID="a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.274628 5109 scope.go:117] "RemoveContainer" containerID="beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.290194 5109 scope.go:117] "RemoveContainer" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.290639 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd\": container with ID starting with a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd not found: ID does not exist" containerID="a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.290680 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd"} err="failed to get container status \"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd\": rpc error: code = NotFound desc = could not find container \"a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd\": container with ID starting with a38658af431554da184cd4044de0c8e9f3b84c8849fe0688ebbc0b70294027fd not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.290706 5109 scope.go:117] "RemoveContainer" containerID="a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.291002 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789\": container with ID starting with a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789 not found: ID does not exist" containerID="a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.291031 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789"} err="failed to get container status \"a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789\": rpc error: code = NotFound desc = could not find container \"a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789\": container with ID starting with a994bf7e0ce51fb7df80628053f8c9846ccc51921f6c345f2344967e8a1df789 not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.291051 5109 scope.go:117] "RemoveContainer" containerID="beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c" Feb 17 00:14:56 crc kubenswrapper[5109]: E0217 00:14:56.291265 5109 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c\": container with ID starting with beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c not found: ID does not exist" containerID="beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.291286 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c"} err="failed to get container status \"beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c\": rpc error: code = NotFound desc = could not find container \"beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c\": container with ID starting with beba75b564eb33c46b2453575646c5df63b0d59aadee0250d50c23815a59ab8c not found: ID does not exist" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.359056 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-8f27m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.359124 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-8f27m" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 00:14:56.995259 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7" Feb 17 00:14:56 crc kubenswrapper[5109]: I0217 
00:14:56.999645 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-z95j7" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.272706 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273207 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273219 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273232 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273238 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273247 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273253 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273266 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273272 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="extract-content" Feb 17 00:14:57 
crc kubenswrapper[5109]: I0217 00:14:57.273279 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273284 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273294 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273299 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273307 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273312 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273319 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273324 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273332 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273337 5109 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273343 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273348 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273356 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273363 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="extract-utilities" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273372 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273392 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273402 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273410 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="extract-content" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273485 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273494 
5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="387ec0da-dcb1-4001-8439-5793c9384015" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273504 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273511 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273521 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" containerName="registry-server" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273624 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273631 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.273720 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" containerName="marketplace-operator" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.293698 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.293898 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.296389 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.387032 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.387565 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.387702 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km64g\" (UniqueName: \"kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.469847 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v7wnw"] Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.477217 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.480930 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.489346 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.489749 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.489943 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-km64g\" (UniqueName: \"kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.491508 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.495539 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.510655 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387ec0da-dcb1-4001-8439-5793c9384015" path="/var/lib/kubelet/pods/387ec0da-dcb1-4001-8439-5793c9384015/volumes" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.512181 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7386e430-c5f0-467b-9375-4eab8c181f1b" path="/var/lib/kubelet/pods/7386e430-c5f0-467b-9375-4eab8c181f1b/volumes" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.514004 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a26206-1199-4cf4-912a-fa5e03a96713" path="/var/lib/kubelet/pods/74a26206-1199-4cf4-912a-fa5e03a96713/volumes" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.515501 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-km64g\" (UniqueName: \"kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g\") pod \"redhat-marketplace-tlkct\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.516914 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fdfe97-5098-4468-92b7-881bc4270004" path="/var/lib/kubelet/pods/a7fdfe97-5098-4468-92b7-881bc4270004/volumes" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.518213 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9e5613-b1ca-483f-8efe-7c150933934b" path="/var/lib/kubelet/pods/fb9e5613-b1ca-483f-8efe-7c150933934b/volumes" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.520523 5109 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7wnw"] Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.591267 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-utilities\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.591360 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z24cs\" (UniqueName: \"kubernetes.io/projected/d2b9166f-e498-40ed-9e69-9223b30c69e2-kube-api-access-z24cs\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.591433 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-catalog-content\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.620138 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.692890 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-catalog-content\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.692963 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-utilities\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.692995 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z24cs\" (UniqueName: \"kubernetes.io/projected/d2b9166f-e498-40ed-9e69-9223b30c69e2-kube-api-access-z24cs\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.693699 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-utilities\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.693912 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2b9166f-e498-40ed-9e69-9223b30c69e2-catalog-content\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " 
pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.716282 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z24cs\" (UniqueName: \"kubernetes.io/projected/d2b9166f-e498-40ed-9e69-9223b30c69e2-kube-api-access-z24cs\") pod \"certified-operators-v7wnw\" (UID: \"d2b9166f-e498-40ed-9e69-9223b30c69e2\") " pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:57 crc kubenswrapper[5109]: I0217 00:14:57.843144 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:14:58 crc kubenswrapper[5109]: I0217 00:14:58.035469 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:14:58 crc kubenswrapper[5109]: W0217 00:14:58.040880 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a5d27d_bc23_4df1_86a7_b44323215f2f.slice/crio-24adba95caf48f56334dbd12649cd4223f474a7440075ecc07c0d74586514c54 WatchSource:0}: Error finding container 24adba95caf48f56334dbd12649cd4223f474a7440075ecc07c0d74586514c54: Status 404 returned error can't find the container with id 24adba95caf48f56334dbd12649cd4223f474a7440075ecc07c0d74586514c54 Feb 17 00:14:58 crc kubenswrapper[5109]: I0217 00:14:58.093269 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v7wnw"] Feb 17 00:14:58 crc kubenswrapper[5109]: W0217 00:14:58.099121 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b9166f_e498_40ed_9e69_9223b30c69e2.slice/crio-8de620021a443a6721650912915c2c14eb3754263f884bdbbba04912fc9b9c88 WatchSource:0}: Error finding container 8de620021a443a6721650912915c2c14eb3754263f884bdbbba04912fc9b9c88: Status 404 returned error can't find the container 
with id 8de620021a443a6721650912915c2c14eb3754263f884bdbbba04912fc9b9c88 Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.007486 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerID="add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2" exitCode=0 Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.007559 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerDied","Data":"add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2"} Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.007897 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerStarted","Data":"24adba95caf48f56334dbd12649cd4223f474a7440075ecc07c0d74586514c54"} Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.010252 5109 generic.go:358] "Generic (PLEG): container finished" podID="d2b9166f-e498-40ed-9e69-9223b30c69e2" containerID="b8cd1b2753df53a3cd277222b19b00b66344103eb808e748907353e508ee1061" exitCode=0 Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.011618 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7wnw" event={"ID":"d2b9166f-e498-40ed-9e69-9223b30c69e2","Type":"ContainerDied","Data":"b8cd1b2753df53a3cd277222b19b00b66344103eb808e748907353e508ee1061"} Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.011644 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7wnw" event={"ID":"d2b9166f-e498-40ed-9e69-9223b30c69e2","Type":"ContainerStarted","Data":"8de620021a443a6721650912915c2c14eb3754263f884bdbbba04912fc9b9c88"} Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.421535 5109 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-5d9d95bf5b-q5fdc"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.426006 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.444727 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-q5fdc"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.518959 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-registry-tls\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519127 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2bh\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-kube-api-access-cb2bh\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519260 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-trusted-ca\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519373 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519497 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519669 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f68a053-dd02-485b-9596-b6033214d27e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519785 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-registry-certificates\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.519828 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f68a053-dd02-485b-9596-b6033214d27e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.540009 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621098 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621176 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f68a053-dd02-485b-9596-b6033214d27e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621206 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-registry-certificates\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621251 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/1f68a053-dd02-485b-9596-b6033214d27e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621470 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-registry-tls\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621534 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2bh\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-kube-api-access-cb2bh\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621583 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-trusted-ca\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.621841 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1f68a053-dd02-485b-9596-b6033214d27e-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.622732 5109 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-registry-certificates\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.622738 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f68a053-dd02-485b-9596-b6033214d27e-trusted-ca\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.627027 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-registry-tls\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.627050 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1f68a053-dd02-485b-9596-b6033214d27e-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.638218 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2bh\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-kube-api-access-cb2bh\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.639349 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f68a053-dd02-485b-9596-b6033214d27e-bound-sa-token\") pod \"image-registry-5d9d95bf5b-q5fdc\" (UID: \"1f68a053-dd02-485b-9596-b6033214d27e\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.674085 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lq4st"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.705016 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lq4st"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.705347 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.707941 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.770108 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.825623 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkpn\" (UniqueName: \"kubernetes.io/projected/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-kube-api-access-vhkpn\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.825673 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-catalog-content\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.825746 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-utilities\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.867861 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9bnr"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.878742 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.881071 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9bnr"] Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.881733 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.928189 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkpn\" (UniqueName: \"kubernetes.io/projected/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-kube-api-access-vhkpn\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.930081 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-catalog-content\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.931720 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-catalog-content\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.931953 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-utilities\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " 
pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.932392 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-utilities\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:14:59 crc kubenswrapper[5109]: I0217 00:14:59.947173 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkpn\" (UniqueName: \"kubernetes.io/projected/2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b-kube-api-access-vhkpn\") pod \"redhat-operators-lq4st\" (UID: \"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b\") " pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.020159 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7wnw" event={"ID":"d2b9166f-e498-40ed-9e69-9223b30c69e2","Type":"ContainerStarted","Data":"03ff2d2330f3c5eaf9e23fdc990d9aac41e3334209150f492239ceac9cba83d4"} Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.023529 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerStarted","Data":"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807"} Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.023611 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.032790 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-q5fdc"] Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.033582 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.033648 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntsk\" (UniqueName: \"kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.033734 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: W0217 00:15:00.043861 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f68a053_dd02_485b_9596_b6033214d27e.slice/crio-776fe9e33e157b63981b2f2af2042afe8a0e8c357690f240d016f4a80dcb1191 WatchSource:0}: Error finding container 776fe9e33e157b63981b2f2af2042afe8a0e8c357690f240d016f4a80dcb1191: Status 404 returned error can't find the container with id 
776fe9e33e157b63981b2f2af2042afe8a0e8c357690f240d016f4a80dcb1191 Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.135663 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.135864 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.135928 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bntsk\" (UniqueName: \"kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.136369 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.136877 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " 
pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.140904 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf"] Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.164609 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntsk\" (UniqueName: \"kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk\") pod \"community-operators-p9bnr\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") " pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: W0217 00:15:00.211948 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b28e0b9_fd8e_4f55_9b5e_d7d74eb6760b.slice/crio-917f6583159fe224797147ddc222abe6dbf529307f794049854f69ba35ada287 WatchSource:0}: Error finding container 917f6583159fe224797147ddc222abe6dbf529307f794049854f69ba35ada287: Status 404 returned error can't find the container with id 917f6583159fe224797147ddc222abe6dbf529307f794049854f69ba35ada287 Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.221926 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.300618 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf"] Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.301120 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lq4st"] Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.300767 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.305925 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.306802 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.433793 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9bnr"] Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.441156 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv26g\" (UniqueName: \"kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.441269 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.441330 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.542745 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.542795 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kv26g\" (UniqueName: \"kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.542840 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.543560 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.548211 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.563061 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv26g\" (UniqueName: \"kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g\") pod \"collect-profiles-29521455-jnlpf\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.614465 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:00 crc kubenswrapper[5109]: I0217 00:15:00.787225 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf"] Feb 17 00:15:00 crc kubenswrapper[5109]: W0217 00:15:00.792306 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5817110e_6416_46b2_a95a_56c8ba1d3117.slice/crio-5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a WatchSource:0}: Error finding container 5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a: Status 404 returned error can't find the container with id 5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.032203 5109 generic.go:358] "Generic (PLEG): container finished" podID="19039840-56d5-49c2-b68c-ec50dc56a282" containerID="d732148f3f7a2f6d2ef39e863e0302f4107e7eb1fbe3956f4e906ddbac730346" exitCode=0 Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.032358 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerDied","Data":"d732148f3f7a2f6d2ef39e863e0302f4107e7eb1fbe3956f4e906ddbac730346"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.034003 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerStarted","Data":"3195d873b8dd59acc3613963cf22738fd0122fc39e878473cb22c55dd7fa64dc"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.037976 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerID="62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807" exitCode=0 Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.038126 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerDied","Data":"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.039101 5109 generic.go:358] "Generic (PLEG): container finished" podID="2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b" containerID="e5f4d6d136927921b3f4cca2a1a3e23a518c620ad1b5679c7bad2811238b4c31" exitCode=0 Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.039210 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq4st" event={"ID":"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b","Type":"ContainerDied","Data":"e5f4d6d136927921b3f4cca2a1a3e23a518c620ad1b5679c7bad2811238b4c31"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.039239 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq4st" event={"ID":"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b","Type":"ContainerStarted","Data":"917f6583159fe224797147ddc222abe6dbf529307f794049854f69ba35ada287"} 
Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.044011 5109 generic.go:358] "Generic (PLEG): container finished" podID="d2b9166f-e498-40ed-9e69-9223b30c69e2" containerID="03ff2d2330f3c5eaf9e23fdc990d9aac41e3334209150f492239ceac9cba83d4" exitCode=0 Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.044489 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7wnw" event={"ID":"d2b9166f-e498-40ed-9e69-9223b30c69e2","Type":"ContainerDied","Data":"03ff2d2330f3c5eaf9e23fdc990d9aac41e3334209150f492239ceac9cba83d4"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.050817 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" event={"ID":"1f68a053-dd02-485b-9596-b6033214d27e","Type":"ContainerStarted","Data":"f7544ddf314506dac47ee2671ae9d4be94433d11bce264bb4ce75139c08a6b4b"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.050877 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" event={"ID":"1f68a053-dd02-485b-9596-b6033214d27e","Type":"ContainerStarted","Data":"776fe9e33e157b63981b2f2af2042afe8a0e8c357690f240d016f4a80dcb1191"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.055065 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" event={"ID":"5817110e-6416-46b2-a95a-56c8ba1d3117","Type":"ContainerStarted","Data":"4960483a8a099d5e8eb04ce36faf1d3fa764b25a7b0d647d04555d1c2d3fc84e"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.055227 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" event={"ID":"5817110e-6416-46b2-a95a-56c8ba1d3117","Type":"ContainerStarted","Data":"5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a"} Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.056979 
5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.134251 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" podStartSLOduration=2.1342306620000002 podStartE2EDuration="2.134230662s" podCreationTimestamp="2026-02-17 00:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:15:01.127242951 +0000 UTC m=+372.458797739" watchObservedRunningTime="2026-02-17 00:15:01.134230662 +0000 UTC m=+372.465785430" Feb 17 00:15:01 crc kubenswrapper[5109]: I0217 00:15:01.150654 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" podStartSLOduration=1.150634988 podStartE2EDuration="1.150634988s" podCreationTimestamp="2026-02-17 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:15:01.145208847 +0000 UTC m=+372.476763615" watchObservedRunningTime="2026-02-17 00:15:01.150634988 +0000 UTC m=+372.482189746" Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.061673 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerStarted","Data":"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c"} Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.071807 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v7wnw" event={"ID":"d2b9166f-e498-40ed-9e69-9223b30c69e2","Type":"ContainerStarted","Data":"bb08ef02182330edd1b253cb62acb0cbdb2e81929b526c3b7500d6b3fa346a0b"} Feb 17 00:15:02 crc 
kubenswrapper[5109]: I0217 00:15:02.073216 5109 generic.go:358] "Generic (PLEG): container finished" podID="5817110e-6416-46b2-a95a-56c8ba1d3117" containerID="4960483a8a099d5e8eb04ce36faf1d3fa764b25a7b0d647d04555d1c2d3fc84e" exitCode=0 Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.073267 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" event={"ID":"5817110e-6416-46b2-a95a-56c8ba1d3117","Type":"ContainerDied","Data":"4960483a8a099d5e8eb04ce36faf1d3fa764b25a7b0d647d04555d1c2d3fc84e"} Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.076046 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerStarted","Data":"e51bb6a39c745535e5c74b89a7e880790c69dd67572f19121598bbe0df46a3a7"} Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.104107 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlkct" podStartSLOduration=4.286006769 podStartE2EDuration="5.104095077s" podCreationTimestamp="2026-02-17 00:14:57 +0000 UTC" firstStartedPulling="2026-02-17 00:14:59.010714926 +0000 UTC m=+370.342269724" lastFinishedPulling="2026-02-17 00:14:59.828803274 +0000 UTC m=+371.160358032" observedRunningTime="2026-02-17 00:15:02.081161312 +0000 UTC m=+373.412716070" watchObservedRunningTime="2026-02-17 00:15:02.104095077 +0000 UTC m=+373.435649835" Feb 17 00:15:02 crc kubenswrapper[5109]: I0217 00:15:02.118627 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v7wnw" podStartSLOduration=4.331350105 podStartE2EDuration="5.118607453s" podCreationTimestamp="2026-02-17 00:14:57 +0000 UTC" firstStartedPulling="2026-02-17 00:14:59.011468626 +0000 UTC m=+370.343023384" lastFinishedPulling="2026-02-17 00:14:59.798725964 +0000 UTC m=+371.130280732" 
observedRunningTime="2026-02-17 00:15:02.11424468 +0000 UTC m=+373.445799438" watchObservedRunningTime="2026-02-17 00:15:02.118607453 +0000 UTC m=+373.450162221" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.083949 5109 generic.go:358] "Generic (PLEG): container finished" podID="2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b" containerID="5760bf0841e4b0fe550938ad3a300001fc349ebd4071f5bc4616c248f3536337" exitCode=0 Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.084352 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq4st" event={"ID":"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b","Type":"ContainerDied","Data":"5760bf0841e4b0fe550938ad3a300001fc349ebd4071f5bc4616c248f3536337"} Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.088830 5109 generic.go:358] "Generic (PLEG): container finished" podID="19039840-56d5-49c2-b68c-ec50dc56a282" containerID="e51bb6a39c745535e5c74b89a7e880790c69dd67572f19121598bbe0df46a3a7" exitCode=0 Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.089106 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerDied","Data":"e51bb6a39c745535e5c74b89a7e880790c69dd67572f19121598bbe0df46a3a7"} Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.363096 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.479901 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv26g\" (UniqueName: \"kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g\") pod \"5817110e-6416-46b2-a95a-56c8ba1d3117\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.479955 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume\") pod \"5817110e-6416-46b2-a95a-56c8ba1d3117\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.480017 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume\") pod \"5817110e-6416-46b2-a95a-56c8ba1d3117\" (UID: \"5817110e-6416-46b2-a95a-56c8ba1d3117\") " Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.480796 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume" (OuterVolumeSpecName: "config-volume") pod "5817110e-6416-46b2-a95a-56c8ba1d3117" (UID: "5817110e-6416-46b2-a95a-56c8ba1d3117"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.481084 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5817110e-6416-46b2-a95a-56c8ba1d3117-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.486635 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g" (OuterVolumeSpecName: "kube-api-access-kv26g") pod "5817110e-6416-46b2-a95a-56c8ba1d3117" (UID: "5817110e-6416-46b2-a95a-56c8ba1d3117"). InnerVolumeSpecName "kube-api-access-kv26g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.486811 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5817110e-6416-46b2-a95a-56c8ba1d3117" (UID: "5817110e-6416-46b2-a95a-56c8ba1d3117"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.582180 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kv26g\" (UniqueName: \"kubernetes.io/projected/5817110e-6416-46b2-a95a-56c8ba1d3117-kube-api-access-kv26g\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:03 crc kubenswrapper[5109]: I0217 00:15:03.582224 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5817110e-6416-46b2-a95a-56c8ba1d3117-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.096527 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.096535 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521455-jnlpf" event={"ID":"5817110e-6416-46b2-a95a-56c8ba1d3117","Type":"ContainerDied","Data":"5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a"} Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.096958 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6ece77986535a641b0a881920433ddca97bb8365da450386e25a74aaa6f45a" Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.099275 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerStarted","Data":"22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06"} Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.101337 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lq4st" event={"ID":"2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b","Type":"ContainerStarted","Data":"04aba3b9d53c662b1a9bfb561268c620b0d14218fcd9a1c17fcaac5e854a24b2"} Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.142583 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9bnr" podStartSLOduration=4.330758043 podStartE2EDuration="5.142565879s" podCreationTimestamp="2026-02-17 00:14:59 +0000 UTC" firstStartedPulling="2026-02-17 00:15:01.042640467 +0000 UTC m=+372.374195245" lastFinishedPulling="2026-02-17 00:15:01.854448333 +0000 UTC m=+373.186003081" observedRunningTime="2026-02-17 00:15:04.139825878 +0000 UTC m=+375.471380646" watchObservedRunningTime="2026-02-17 00:15:04.142565879 +0000 UTC m=+375.474120637" Feb 17 00:15:04 crc kubenswrapper[5109]: I0217 00:15:04.178420 
5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lq4st" podStartSLOduration=4.2223161 podStartE2EDuration="5.178403398s" podCreationTimestamp="2026-02-17 00:14:59 +0000 UTC" firstStartedPulling="2026-02-17 00:15:01.041054095 +0000 UTC m=+372.372608893" lastFinishedPulling="2026-02-17 00:15:01.997141433 +0000 UTC m=+373.328696191" observedRunningTime="2026-02-17 00:15:04.177853114 +0000 UTC m=+375.509407872" watchObservedRunningTime="2026-02-17 00:15:04.178403398 +0000 UTC m=+375.509958156" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.621060 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.622038 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.668755 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.843500 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.843548 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:15:07 crc kubenswrapper[5109]: I0217 00:15:07.880211 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:15:08 crc kubenswrapper[5109]: I0217 00:15:08.223612 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v7wnw" Feb 17 00:15:08 crc kubenswrapper[5109]: I0217 00:15:08.224277 5109 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.023838 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.024973 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.070529 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.222724 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.223714 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.237727 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lq4st" Feb 17 00:15:10 crc kubenswrapper[5109]: I0217 00:15:10.269581 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:11 crc kubenswrapper[5109]: I0217 00:15:11.255633 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9bnr" Feb 17 00:15:23 crc kubenswrapper[5109]: I0217 00:15:23.103117 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-q5fdc" Feb 17 00:15:23 crc kubenswrapper[5109]: I0217 00:15:23.182530 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"] Feb 17 00:15:50 crc kubenswrapper[5109]: I0217 00:15:50.791968 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" podUID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" containerName="registry" containerID="cri-o://b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d" gracePeriod=28 Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.192541 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.274880 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275011 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275059 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275110 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca\") pod 
\"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275156 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275295 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275321 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.275369 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjcdm\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm\") pod \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\" (UID: \"9c5b02a2-437a-46c3-b4ce-d856b61053f6\") " Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.276242 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.276562 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.283407 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.286242 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.290931 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm" (OuterVolumeSpecName: "kube-api-access-hjcdm") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "kube-api-access-hjcdm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.290930 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.291074 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.300234 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9c5b02a2-437a-46c3-b4ce-d856b61053f6" (UID: "9c5b02a2-437a-46c3-b4ce-d856b61053f6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377404 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9c5b02a2-437a-46c3-b4ce-d856b61053f6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377477 5109 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377503 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c5b02a2-437a-46c3-b4ce-d856b61053f6-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377521 5109 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9c5b02a2-437a-46c3-b4ce-d856b61053f6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377637 5109 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377665 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hjcdm\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-kube-api-access-hjcdm\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.377689 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c5b02a2-437a-46c3-b4ce-d856b61053f6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 00:15:51 crc 
kubenswrapper[5109]: I0217 00:15:51.879870 5109 generic.go:358] "Generic (PLEG): container finished" podID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" containerID="b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d" exitCode=0 Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.879961 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" event={"ID":"9c5b02a2-437a-46c3-b4ce-d856b61053f6","Type":"ContainerDied","Data":"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d"} Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.880017 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" event={"ID":"9c5b02a2-437a-46c3-b4ce-d856b61053f6","Type":"ContainerDied","Data":"a589723eb4ed0916bdf38360cd9240be2b1cc1e993fd29f9009e9233a5a253ae"} Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.880055 5109 scope.go:117] "RemoveContainer" containerID="b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.880083 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-6jz6g" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.913331 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"] Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.919791 5109 scope.go:117] "RemoveContainer" containerID="b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.920156 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-6jz6g"] Feb 17 00:15:51 crc kubenswrapper[5109]: E0217 00:15:51.920529 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d\": container with ID starting with b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d not found: ID does not exist" containerID="b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d" Feb 17 00:15:51 crc kubenswrapper[5109]: I0217 00:15:51.920586 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d"} err="failed to get container status \"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d\": rpc error: code = NotFound desc = could not find container \"b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d\": container with ID starting with b6c623978a305678878d886310b8580773fc937299ac17e3b9e76bb263542e3d not found: ID does not exist" Feb 17 00:15:53 crc kubenswrapper[5109]: I0217 00:15:53.478466 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" path="/var/lib/kubelet/pods/9c5b02a2-437a-46c3-b4ce-d856b61053f6/volumes" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 
00:16:00.131469 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521456-hwxxh"] Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132551 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" containerName="registry" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132569 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" containerName="registry" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132622 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5817110e-6416-46b2-a95a-56c8ba1d3117" containerName="collect-profiles" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132630 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="5817110e-6416-46b2-a95a-56c8ba1d3117" containerName="collect-profiles" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132735 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="9c5b02a2-437a-46c3-b4ce-d856b61053f6" containerName="registry" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.132754 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="5817110e-6416-46b2-a95a-56c8ba1d3117" containerName="collect-profiles" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.146025 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521456-hwxxh"] Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.146162 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.149168 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\"" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.149522 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.150689 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.214379 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfd72\" (UniqueName: \"kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72\") pod \"auto-csr-approver-29521456-hwxxh\" (UID: \"d5d6f33d-c80d-442d-9f91-d0162e328c59\") " pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.315501 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfd72\" (UniqueName: \"kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72\") pod \"auto-csr-approver-29521456-hwxxh\" (UID: \"d5d6f33d-c80d-442d-9f91-d0162e328c59\") " pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.340923 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfd72\" (UniqueName: \"kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72\") pod \"auto-csr-approver-29521456-hwxxh\" (UID: \"d5d6f33d-c80d-442d-9f91-d0162e328c59\") " pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.475036 5109 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.729307 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521456-hwxxh"] Feb 17 00:16:00 crc kubenswrapper[5109]: W0217 00:16:00.734139 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5d6f33d_c80d_442d_9f91_d0162e328c59.slice/crio-d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98 WatchSource:0}: Error finding container d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98: Status 404 returned error can't find the container with id d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98 Feb 17 00:16:00 crc kubenswrapper[5109]: I0217 00:16:00.948450 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" event={"ID":"d5d6f33d-c80d-442d-9f91-d0162e328c59","Type":"ContainerStarted","Data":"d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98"} Feb 17 00:16:03 crc kubenswrapper[5109]: I0217 00:16:03.969723 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" event={"ID":"d5d6f33d-c80d-442d-9f91-d0162e328c59","Type":"ContainerStarted","Data":"605a180c05f6ed5f473b5bf1522a5fa3e95c8563bc2ca7b066ecebfedd9acb73"} Feb 17 00:16:03 crc kubenswrapper[5109]: I0217 00:16:03.996656 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" podStartSLOduration=1.24749689 podStartE2EDuration="3.996631592s" podCreationTimestamp="2026-02-17 00:16:00 +0000 UTC" firstStartedPulling="2026-02-17 00:16:00.736704433 +0000 UTC m=+432.068259231" lastFinishedPulling="2026-02-17 00:16:03.485839165 +0000 UTC m=+434.817393933" observedRunningTime="2026-02-17 00:16:03.989979683 
+0000 UTC m=+435.321534481" watchObservedRunningTime="2026-02-17 00:16:03.996631592 +0000 UTC m=+435.328186390" Feb 17 00:16:04 crc kubenswrapper[5109]: I0217 00:16:04.279836 5109 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-vcfd5" Feb 17 00:16:04 crc kubenswrapper[5109]: I0217 00:16:04.301163 5109 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-vcfd5" Feb 17 00:16:04 crc kubenswrapper[5109]: I0217 00:16:04.979966 5109 generic.go:358] "Generic (PLEG): container finished" podID="d5d6f33d-c80d-442d-9f91-d0162e328c59" containerID="605a180c05f6ed5f473b5bf1522a5fa3e95c8563bc2ca7b066ecebfedd9acb73" exitCode=0 Feb 17 00:16:04 crc kubenswrapper[5109]: I0217 00:16:04.980126 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" event={"ID":"d5d6f33d-c80d-442d-9f91-d0162e328c59","Type":"ContainerDied","Data":"605a180c05f6ed5f473b5bf1522a5fa3e95c8563bc2ca7b066ecebfedd9acb73"} Feb 17 00:16:05 crc kubenswrapper[5109]: I0217 00:16:05.303068 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-19 00:11:04 +0000 UTC" deadline="2026-03-10 06:26:04.96180532 +0000 UTC" Feb 17 00:16:05 crc kubenswrapper[5109]: I0217 00:16:05.303475 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="510h9m59.658338652s" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.280113 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.307846 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-19 00:11:04 +0000 UTC" deadline="2026-03-11 00:49:24.50802156 +0000 UTC" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.308170 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="528h33m18.199859088s" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.408760 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfd72\" (UniqueName: \"kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72\") pod \"d5d6f33d-c80d-442d-9f91-d0162e328c59\" (UID: \"d5d6f33d-c80d-442d-9f91-d0162e328c59\") " Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.418986 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72" (OuterVolumeSpecName: "kube-api-access-lfd72") pod "d5d6f33d-c80d-442d-9f91-d0162e328c59" (UID: "d5d6f33d-c80d-442d-9f91-d0162e328c59"). InnerVolumeSpecName "kube-api-access-lfd72". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.511311 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfd72\" (UniqueName: \"kubernetes.io/projected/d5d6f33d-c80d-442d-9f91-d0162e328c59-kube-api-access-lfd72\") on node \"crc\" DevicePath \"\"" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.995306 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.995340 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521456-hwxxh" event={"ID":"d5d6f33d-c80d-442d-9f91-d0162e328c59","Type":"ContainerDied","Data":"d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98"} Feb 17 00:16:06 crc kubenswrapper[5109]: I0217 00:16:06.997289 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2069d2673bfe6b61c78c2318728eb3dfae190da6a30c966478c1461d05abc98" Feb 17 00:16:30 crc kubenswrapper[5109]: I0217 00:16:30.800250 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:16:30 crc kubenswrapper[5109]: I0217 00:16:30.800976 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:16:50 crc kubenswrapper[5109]: I0217 00:16:50.945913 5109 scope.go:117] "RemoveContainer" containerID="854ea946ed69699e59c5630202e48752d2e227d4c574ba5c9f267fcf2028141a" Feb 17 00:17:00 crc kubenswrapper[5109]: I0217 00:17:00.800250 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:17:00 crc kubenswrapper[5109]: I0217 00:17:00.802967 5109 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:17:30 crc kubenswrapper[5109]: I0217 00:17:30.800491 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:17:30 crc kubenswrapper[5109]: I0217 00:17:30.803185 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:17:30 crc kubenswrapper[5109]: I0217 00:17:30.803418 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:17:30 crc kubenswrapper[5109]: I0217 00:17:30.804423 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:17:30 crc kubenswrapper[5109]: I0217 00:17:30.804752 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" 
containerID="cri-o://ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36" gracePeriod=600 Feb 17 00:17:31 crc kubenswrapper[5109]: I0217 00:17:31.624989 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36" exitCode=0 Feb 17 00:17:31 crc kubenswrapper[5109]: I0217 00:17:31.625053 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36"} Feb 17 00:17:31 crc kubenswrapper[5109]: I0217 00:17:31.625845 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f"} Feb 17 00:17:31 crc kubenswrapper[5109]: I0217 00:17:31.625873 5109 scope.go:117] "RemoveContainer" containerID="7981733834e5113824e0605b264ad8ffcb2706e3cea14ef7eaf54cb2b20e2859" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.147297 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521458-7jmzl"] Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.148902 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5d6f33d-c80d-442d-9f91-d0162e328c59" containerName="oc" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.148923 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d6f33d-c80d-442d-9f91-d0162e328c59" containerName="oc" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.149070 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5d6f33d-c80d-442d-9f91-d0162e328c59" containerName="oc" Feb 17 00:18:00 crc kubenswrapper[5109]: 
I0217 00:18:00.153104 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.154799 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521458-7jmzl"] Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.155274 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\"" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.156169 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.156300 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.183772 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxr4n\" (UniqueName: \"kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n\") pod \"auto-csr-approver-29521458-7jmzl\" (UID: \"216e9b1d-31c8-4016-9031-97f9f4ec879e\") " pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.285051 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxr4n\" (UniqueName: \"kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n\") pod \"auto-csr-approver-29521458-7jmzl\" (UID: \"216e9b1d-31c8-4016-9031-97f9f4ec879e\") " pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.310529 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxr4n\" (UniqueName: 
\"kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n\") pod \"auto-csr-approver-29521458-7jmzl\" (UID: \"216e9b1d-31c8-4016-9031-97f9f4ec879e\") " pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.474788 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.762140 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521458-7jmzl"] Feb 17 00:18:00 crc kubenswrapper[5109]: I0217 00:18:00.816824 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" event={"ID":"216e9b1d-31c8-4016-9031-97f9f4ec879e","Type":"ContainerStarted","Data":"d2849b1ad79a96881b122644c1df67aae509a65d4d3ef3039877423a3496cad0"} Feb 17 00:18:02 crc kubenswrapper[5109]: I0217 00:18:02.833292 5109 generic.go:358] "Generic (PLEG): container finished" podID="216e9b1d-31c8-4016-9031-97f9f4ec879e" containerID="866e4b63ca465084f236cc7cce4793015ec72987da5434eb27a3b081be1d9394" exitCode=0 Feb 17 00:18:02 crc kubenswrapper[5109]: I0217 00:18:02.833406 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" event={"ID":"216e9b1d-31c8-4016-9031-97f9f4ec879e","Type":"ContainerDied","Data":"866e4b63ca465084f236cc7cce4793015ec72987da5434eb27a3b081be1d9394"} Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.087033 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.137267 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxr4n\" (UniqueName: \"kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n\") pod \"216e9b1d-31c8-4016-9031-97f9f4ec879e\" (UID: \"216e9b1d-31c8-4016-9031-97f9f4ec879e\") " Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.149838 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n" (OuterVolumeSpecName: "kube-api-access-vxr4n") pod "216e9b1d-31c8-4016-9031-97f9f4ec879e" (UID: "216e9b1d-31c8-4016-9031-97f9f4ec879e"). InnerVolumeSpecName "kube-api-access-vxr4n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.240686 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxr4n\" (UniqueName: \"kubernetes.io/projected/216e9b1d-31c8-4016-9031-97f9f4ec879e-kube-api-access-vxr4n\") on node \"crc\" DevicePath \"\"" Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.851520 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.851570 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521458-7jmzl" event={"ID":"216e9b1d-31c8-4016-9031-97f9f4ec879e","Type":"ContainerDied","Data":"d2849b1ad79a96881b122644c1df67aae509a65d4d3ef3039877423a3496cad0"} Feb 17 00:18:04 crc kubenswrapper[5109]: I0217 00:18:04.851666 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2849b1ad79a96881b122644c1df67aae509a65d4d3ef3039877423a3496cad0" Feb 17 00:18:49 crc kubenswrapper[5109]: I0217 00:18:49.812499 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 17 00:18:49 crc kubenswrapper[5109]: I0217 00:18:49.818197 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.380393 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"] Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.384361 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="kube-rbac-proxy" containerID="cri-o://ea1ad12a61b9c4366fe0473f416a58533b1cca24e29cccc74cc9cc79de87cc1d" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.384472 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="ovnkube-cluster-manager" 
containerID="cri-o://14033009baab184f3748e60e4924fcc69138f6f45662537d53ff835ad32aa323" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571303 5109 generic.go:358] "Generic (PLEG): container finished" podID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerID="14033009baab184f3748e60e4924fcc69138f6f45662537d53ff835ad32aa323" exitCode=0 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571673 5109 generic.go:358] "Generic (PLEG): container finished" podID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerID="ea1ad12a61b9c4366fe0473f416a58533b1cca24e29cccc74cc9cc79de87cc1d" exitCode=0 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571421 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerDied","Data":"14033009baab184f3748e60e4924fcc69138f6f45662537d53ff835ad32aa323"} Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571795 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerDied","Data":"ea1ad12a61b9c4366fe0473f416a58533b1cca24e29cccc74cc9cc79de87cc1d"} Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571812 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" event={"ID":"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df","Type":"ContainerDied","Data":"f6f46e4f1b97c43f7c86a12a1a039b5937adc2498267cb2477c08c969954582a"} Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.571826 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6f46e4f1b97c43f7c86a12a1a039b5937adc2498267cb2477c08c969954582a" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.592376 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-5wnz5"] Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593409 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-controller" containerID="cri-o://25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593583 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-node" containerID="cri-o://625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593496 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="sbdb" containerID="cri-o://5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593520 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="northd" containerID="cri-o://3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593517 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593456 5109 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="nbdb" containerID="cri-o://a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.593626 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-acl-logging" containerID="cri-o://65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.645376 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.656839 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovnkube-controller" containerID="cri-o://46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" gracePeriod=30 Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.672788 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7"] Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673281 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="216e9b1d-31c8-4016-9031-97f9f4ec879e" containerName="oc" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673300 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e9b1d-31c8-4016-9031-97f9f4ec879e" containerName="oc" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673311 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" 
containerName="ovnkube-cluster-manager" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673319 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="ovnkube-cluster-manager" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673349 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="kube-rbac-proxy" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673356 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="kube-rbac-proxy" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673446 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="ovnkube-cluster-manager" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673459 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="216e9b1d-31c8-4016-9031-97f9f4ec879e" containerName="oc" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.673470 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" containerName="kube-rbac-proxy" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.676761 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.839701 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config\") pod \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.839748 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides\") pod \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.839778 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m8bp\" (UniqueName: \"kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp\") pod \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.839809 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert\") pod \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\" (UID: \"aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.839967 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 
00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.840010 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.840039 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.840056 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkhk\" (UniqueName: \"kubernetes.io/projected/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-kube-api-access-hgkhk\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.840983 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" (UID: "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.841153 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" (UID: "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.857688 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" (UID: "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.875577 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp" (OuterVolumeSpecName: "kube-api-access-7m8bp") pod "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" (UID: "aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df"). InnerVolumeSpecName "kube-api-access-7m8bp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.929937 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5wnz5_900bd7e9-9e0a-4472-9882-1a0b3e829007/ovn-acl-logging/0.log" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.930374 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5wnz5_900bd7e9-9e0a-4472-9882-1a0b3e829007/ovn-controller/0.log" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.930785 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.941695 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.941930 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.941976 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942003 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942031 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942057 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6jfd\" (UniqueName: \"kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942081 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942113 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942148 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942181 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942206 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942227 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942256 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc 
kubenswrapper[5109]: I0217 00:19:53.942274 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942300 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942337 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942351 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942387 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942410 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942436 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942452 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet\") pod \"900bd7e9-9e0a-4472-9882-1a0b3e829007\" (UID: \"900bd7e9-9e0a-4472-9882-1a0b3e829007\") " Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942581 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942651 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942687 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942708 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkhk\" (UniqueName: \"kubernetes.io/projected/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-kube-api-access-hgkhk\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942769 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942782 5109 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942792 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942802 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942812 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7m8bp\" (UniqueName: 
\"kubernetes.io/projected/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df-kube-api-access-7m8bp\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942973 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.942998 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943043 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket" (OuterVolumeSpecName: "log-socket") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943084 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943198 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash" (OuterVolumeSpecName: "host-slash") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943229 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943263 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943299 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log" (OuterVolumeSpecName: "node-log") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943413 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943773 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.943988 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944052 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944080 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944128 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944436 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944478 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.944678 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.945062 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.947578 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.948346 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.948489 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd" (OuterVolumeSpecName: "kube-api-access-b6jfd") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "kube-api-access-b6jfd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.957038 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "900bd7e9-9e0a-4472-9882-1a0b3e829007" (UID: "900bd7e9-9e0a-4472-9882-1a0b3e829007"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.960930 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkhk\" (UniqueName: \"kubernetes.io/projected/7fb394af-ebca-4777-8c84-c17bbd1b6fd5-kube-api-access-hgkhk\") pod \"ovnkube-control-plane-97c9b6c48-bwlw7\" (UID: \"7fb394af-ebca-4777-8c84-c17bbd1b6fd5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.984217 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rv26r"] Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985129 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985223 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:19:53 crc 
kubenswrapper[5109]: I0217 00:19:53.985312 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985395 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985474 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kubecfg-setup" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985558 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kubecfg-setup" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985667 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovnkube-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985750 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovnkube-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985833 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="nbdb" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985899 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="nbdb" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.985969 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="sbdb" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986039 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="sbdb" Feb 17 
00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986112 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-node" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986177 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-node" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986257 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-acl-logging" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986323 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-acl-logging" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986393 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="northd" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986465 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="northd" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986650 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-node" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986746 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-acl-logging" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986824 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovn-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.986904 5109 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="sbdb" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.987002 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="nbdb" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.987076 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="kube-rbac-proxy-ovn-metrics" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.987155 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="northd" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.987230 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerName="ovnkube-controller" Feb 17 00:19:53 crc kubenswrapper[5109]: I0217 00:19:53.994807 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044083 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-bin\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044148 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-script-lib\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044189 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-systemd-units\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044257 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-netns\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044295 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-var-lib-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044370 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-env-overrides\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044407 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-ovn\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044440 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-node-log\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044530 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-log-socket\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044663 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-netd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044717 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-kubelet\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044749 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-systemd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044794 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-config\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044853 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-slash\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044890 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovn-node-metrics-cert\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044935 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.044996 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwtdp\" (UniqueName: \"kubernetes.io/projected/8c329d85-d26e-437f-82f4-4f8a12e54d44-kube-api-access-bwtdp\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045072 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045168 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-etc-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045264 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045389 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b6jfd\" (UniqueName: \"kubernetes.io/projected/900bd7e9-9e0a-4472-9882-1a0b3e829007-kube-api-access-b6jfd\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045424 5109 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045443 5109 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045463 5109 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-slash\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045480 5109 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045497 5109 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-log-socket\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045515 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045535 5109 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045552 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045570 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc 
kubenswrapper[5109]: I0217 00:19:54.045587 5109 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045626 5109 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045642 5109 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045660 5109 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-node-log\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045677 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/900bd7e9-9e0a-4472-9882-1a0b3e829007-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045694 5109 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045712 5109 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045730 5109 
reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.045749 5109 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/900bd7e9-9e0a-4472-9882-1a0b3e829007-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.053735 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.082761 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.146778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwtdp\" (UniqueName: \"kubernetes.io/projected/8c329d85-d26e-437f-82f4-4f8a12e54d44-kube-api-access-bwtdp\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147116 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147203 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147320 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-etc-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147434 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-etc-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147533 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147569 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147753 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-bin\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: 
I0217 00:19:54.147947 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-script-lib\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148058 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-systemd-units\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148153 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-netns\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148252 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-var-lib-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148436 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-env-overrides\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148561 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-ovn\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148703 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-node-log\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148778 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-node-log\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.147861 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-bin\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148272 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-run-netns\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148647 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-ovn\") pod 
\"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148152 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-systemd-units\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.148333 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-var-lib-openvswitch\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149102 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-script-lib\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149202 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-log-socket\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149279 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-log-socket\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149341 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-env-overrides\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149442 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-netd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149524 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-kubelet\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149663 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-systemd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149823 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-config\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 
00:19:54.150043 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-slash\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149894 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-kubelet\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149953 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-run-systemd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.150123 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovn-node-metrics-cert\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.149913 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-cni-netd\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.150458 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovnkube-config\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.150630 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-slash\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.150874 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.150968 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c329d85-d26e-437f-82f4-4f8a12e54d44-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.157813 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c329d85-d26e-437f-82f4-4f8a12e54d44-ovn-node-metrics-cert\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.177201 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bwtdp\" (UniqueName: \"kubernetes.io/projected/8c329d85-d26e-437f-82f4-4f8a12e54d44-kube-api-access-bwtdp\") pod \"ovnkube-node-rv26r\" (UID: \"8c329d85-d26e-437f-82f4-4f8a12e54d44\") " pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.317064 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:19:54 crc kubenswrapper[5109]: W0217 00:19:54.334387 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c329d85_d26e_437f_82f4_4f8a12e54d44.slice/crio-9408385dba7f25b72e61969b655a1332a4695892da4b1fa89df4fd0b57cc5e04 WatchSource:0}: Error finding container 9408385dba7f25b72e61969b655a1332a4695892da4b1fa89df4fd0b57cc5e04: Status 404 returned error can't find the container with id 9408385dba7f25b72e61969b655a1332a4695892da4b1fa89df4fd0b57cc5e04 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.589823 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" event={"ID":"7fb394af-ebca-4777-8c84-c17bbd1b6fd5","Type":"ContainerStarted","Data":"f7fbd0a5f02f7777a322010ea59b7220cef1b75c2bfa02a754bfa2ad10b4c6c2"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.589905 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" event={"ID":"7fb394af-ebca-4777-8c84-c17bbd1b6fd5","Type":"ContainerStarted","Data":"a1d2331b906b8c48d2191a2a1f7304e5d042c3b20728993b8b2ce39faad64270"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.592202 5109 generic.go:358] "Generic (PLEG): container finished" podID="8c329d85-d26e-437f-82f4-4f8a12e54d44" containerID="2cafa7109de80bafa84b5801225af818697b29e8bd5e3318bcf8f34a0012db17" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.592381 5109 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerDied","Data":"2cafa7109de80bafa84b5801225af818697b29e8bd5e3318bcf8f34a0012db17"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.592447 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"9408385dba7f25b72e61969b655a1332a4695892da4b1fa89df4fd0b57cc5e04"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.602670 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5wnz5_900bd7e9-9e0a-4472-9882-1a0b3e829007/ovn-acl-logging/0.log" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603196 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5wnz5_900bd7e9-9e0a-4472-9882-1a0b3e829007/ovn-controller/0.log" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603489 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603509 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603515 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603521 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" 
containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603528 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603534 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" exitCode=0 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603540 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" exitCode=143 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603546 5109 generic.go:358] "Generic (PLEG): container finished" podID="900bd7e9-9e0a-4472-9882-1a0b3e829007" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" exitCode=143 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603639 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603663 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603675 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" 
event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603689 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603697 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603705 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603716 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603723 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603728 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603734 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603740 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603745 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603750 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603755 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603759 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603764 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603769 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 
00:19:54.603773 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603777 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603784 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603790 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603795 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603801 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603806 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603810 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603815 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603820 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603824 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603829 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603836 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" event={"ID":"900bd7e9-9e0a-4472-9882-1a0b3e829007","Type":"ContainerDied","Data":"70b5a3895f34cadc5c19731a662f705a995c5a83c7a31f0b2f61c4a9a7cc83f6"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603843 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603848 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603853 5109 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603857 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603862 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603866 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603871 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603878 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603882 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.603894 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.604214 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5wnz5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.607338 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.607417 5109 generic.go:358] "Generic (PLEG): container finished" podID="a1a466bd-accd-4381-b1f0-357d6e20410e" containerID="db4ea6daea6acf078ea6b5f81ae1a7478dee8360368d4c8db797447141483453" exitCode=2 Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.607524 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bbh4j" event={"ID":"a1a466bd-accd-4381-b1f0-357d6e20410e","Type":"ContainerDied","Data":"db4ea6daea6acf078ea6b5f81ae1a7478dee8360368d4c8db797447141483453"} Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.607587 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.608527 5109 scope.go:117] "RemoveContainer" containerID="db4ea6daea6acf078ea6b5f81ae1a7478dee8360368d4c8db797447141483453" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.642619 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5wnz5"] Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.643530 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.647765 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5wnz5"] Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.676011 5109 scope.go:117] "RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.680539 
5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"] Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.683381 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-d9cml"] Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.719679 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.740639 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.766947 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.786474 5109 scope.go:117] "RemoveContainer" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.820860 5109 scope.go:117] "RemoveContainer" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.837438 5109 scope.go:117] "RemoveContainer" containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.853505 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.853969 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" 
containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854012 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} err="failed to get container status \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854042 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.854375 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854419 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} err="failed to get container status \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": rpc error: code = NotFound desc = could not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854438 5109 scope.go:117] 
"RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.854669 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not exist" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854699 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} err="failed to get container status \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854717 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.854935 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854973 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} err="failed to get container status \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.854986 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.855201 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855228 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} err="failed to get container status \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855244 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.855483 5109 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855525 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} err="failed to get container status \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855550 5109 scope.go:117] "RemoveContainer" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.855847 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": container with ID starting with 65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db not found: ID does not exist" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855873 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} err="failed to get container status \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": rpc error: code = NotFound desc = could not find container 
\"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": container with ID starting with 65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.855891 5109 scope.go:117] "RemoveContainer" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.856151 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": container with ID starting with 25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f not found: ID does not exist" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856174 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} err="failed to get container status \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": rpc error: code = NotFound desc = could not find container \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": container with ID starting with 25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856187 5109 scope.go:117] "RemoveContainer" containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: E0217 00:19:54.856417 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": container with ID starting with a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f not found: ID does not exist" 
containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856440 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} err="failed to get container status \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": rpc error: code = NotFound desc = could not find container \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": container with ID starting with a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856456 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856652 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} err="failed to get container status \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856678 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856853 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} err="failed to get container status \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": rpc error: code = NotFound desc = could 
not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.856877 5109 scope.go:117] "RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857051 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} err="failed to get container status \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857085 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857304 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} err="failed to get container status \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857324 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 
00:19:54.857533 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} err="failed to get container status \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857565 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857810 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} err="failed to get container status \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.857831 5109 scope.go:117] "RemoveContainer" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858168 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} err="failed to get container status \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": rpc error: code = NotFound desc = could not find container \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": container with ID starting with 
65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858185 5109 scope.go:117] "RemoveContainer" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858391 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} err="failed to get container status \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": rpc error: code = NotFound desc = could not find container \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": container with ID starting with 25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858423 5109 scope.go:117] "RemoveContainer" containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858620 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} err="failed to get container status \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": rpc error: code = NotFound desc = could not find container \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": container with ID starting with a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858643 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858813 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} err="failed to get container status \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.858831 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859088 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} err="failed to get container status \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": rpc error: code = NotFound desc = could not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859125 5109 scope.go:117] "RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859380 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} err="failed to get container status \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not 
exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859414 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859683 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} err="failed to get container status \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859703 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859908 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} err="failed to get container status \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.859940 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860170 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} err="failed to get container status 
\"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860193 5109 scope.go:117] "RemoveContainer" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860499 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} err="failed to get container status \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": rpc error: code = NotFound desc = could not find container \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": container with ID starting with 65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860527 5109 scope.go:117] "RemoveContainer" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860848 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} err="failed to get container status \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": rpc error: code = NotFound desc = could not find container \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": container with ID starting with 25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.860870 5109 scope.go:117] "RemoveContainer" 
containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861073 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} err="failed to get container status \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": rpc error: code = NotFound desc = could not find container \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": container with ID starting with a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861093 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861390 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} err="failed to get container status \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861411 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861568 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} err="failed to get container status \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": rpc error: code = NotFound desc = could 
not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861604 5109 scope.go:117] "RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861793 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} err="failed to get container status \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.861814 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862030 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} err="failed to get container status \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862057 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 
00:19:54.862387 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} err="failed to get container status \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862406 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862679 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} err="failed to get container status \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862708 5109 scope.go:117] "RemoveContainer" containerID="65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.862981 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db"} err="failed to get container status \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": rpc error: code = NotFound desc = could not find container \"65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db\": container with ID starting with 
65879a7aa8750fbe27f901d53c17be1ef272c49e93f784c3aa12f6a56a9a71db not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863019 5109 scope.go:117] "RemoveContainer" containerID="25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863216 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f"} err="failed to get container status \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": rpc error: code = NotFound desc = could not find container \"25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f\": container with ID starting with 25022737d2581a98213f0c7ec1277f45ff3981f42c3b92dcd2e74018c5b0eb7f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863232 5109 scope.go:117] "RemoveContainer" containerID="a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863458 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f"} err="failed to get container status \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": rpc error: code = NotFound desc = could not find container \"a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f\": container with ID starting with a999b316c34b2141d21782714543fb68e71868aa533f8d62258829bda33ce87f not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863479 5109 scope.go:117] "RemoveContainer" containerID="46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863735 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a"} err="failed to get container status \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": rpc error: code = NotFound desc = could not find container \"46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a\": container with ID starting with 46dd47604eb1af02e630bedb4cec7d704772a5868fdb7dc4c5ab215ce369aa2a not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863755 5109 scope.go:117] "RemoveContainer" containerID="5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863985 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5"} err="failed to get container status \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": rpc error: code = NotFound desc = could not find container \"5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5\": container with ID starting with 5a7642554b01c752ac6e04c4aad50dbbe6a19df80e802a571399af0f741efab5 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.863997 5109 scope.go:117] "RemoveContainer" containerID="a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864194 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa"} err="failed to get container status \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": rpc error: code = NotFound desc = could not find container \"a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa\": container with ID starting with a1b39019de3fb7c3abf86a6374a77499a741067524cd44b18d61a436d6a4f8fa not found: ID does not 
exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864216 5109 scope.go:117] "RemoveContainer" containerID="3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864435 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444"} err="failed to get container status \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": rpc error: code = NotFound desc = could not find container \"3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444\": container with ID starting with 3030416b010f5d49706807817c2e1a1212d9650abf90e2259f8e12e0a13bd444 not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864455 5109 scope.go:117] "RemoveContainer" containerID="4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864687 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea"} err="failed to get container status \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": rpc error: code = NotFound desc = could not find container \"4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea\": container with ID starting with 4a8c76d349e7ab01163d344a40864415a86737f5a64ee1e93c945d011a0bbeea not found: ID does not exist" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.864722 5109 scope.go:117] "RemoveContainer" containerID="625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59" Feb 17 00:19:54 crc kubenswrapper[5109]: I0217 00:19:54.865011 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59"} err="failed to get container status 
\"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": rpc error: code = NotFound desc = could not find container \"625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59\": container with ID starting with 625d82c50cf4f7dca3a5ac3bbee80fe20c1d2d41f8602ed12434a70eb1ef1f59 not found: ID does not exist" Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.473140 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900bd7e9-9e0a-4472-9882-1a0b3e829007" path="/var/lib/kubelet/pods/900bd7e9-9e0a-4472-9882-1a0b3e829007/volumes" Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.475343 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df" path="/var/lib/kubelet/pods/aafc76cf-00d7-4cc8-a0f1-4f5d1705c9df/volumes" Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.617114 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log" Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.617884 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bbh4j" event={"ID":"a1a466bd-accd-4381-b1f0-357d6e20410e","Type":"ContainerStarted","Data":"277e14de107e4be89ab036184b718c5773f2d3fb8bc58fc2be95d67c06f6d0af"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.622017 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" event={"ID":"7fb394af-ebca-4777-8c84-c17bbd1b6fd5","Type":"ContainerStarted","Data":"d71ce172e5344f6e1131d1bee0b7a94225eebd925c6bc35507d8cdf1edb8a0aa"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.627019 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" 
event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"1c56f0ff646306983d4423ef77a8e3e7bb2feddc67966286883b2ad14d4ab80c"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.627101 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"ce7351544ec6b3822eac3565daec28a6f7aaa5eabd38a56e2f381245b7285b3f"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.627132 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"87937f6e064a57461ee1daa6ce23db098fb289790e47dc238bd45438226c33c9"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.627160 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"d161883b53ab5dc6f9973ede220b549b41fcade302c092821d2df17ea3754149"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.627184 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"2e25d6db8e7d30c77f4bd0b34f2b79142df4f819b9d67672e54eb24b0a4e483d"} Feb 17 00:19:55 crc kubenswrapper[5109]: I0217 00:19:55.669171 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-bwlw7" podStartSLOduration=2.66914337 podStartE2EDuration="2.66914337s" podCreationTimestamp="2026-02-17 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:19:55.659206716 +0000 UTC m=+666.990761474" watchObservedRunningTime="2026-02-17 00:19:55.66914337 +0000 
UTC m=+667.000698168" Feb 17 00:19:56 crc kubenswrapper[5109]: I0217 00:19:56.643328 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"42dbae2aeb19bb0ef7e426ab9a937e6073b77c09b4b3601b5d4e7a660c8967ca"} Feb 17 00:19:58 crc kubenswrapper[5109]: I0217 00:19:58.664782 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"bc8e81414dbc10133443352505477fd1bd25ce4926e3dd171fa2922948fbb98e"} Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.141284 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521460-x7q4k"] Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.147071 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.149406 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.150579 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\"" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.150955 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.238548 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77kc\" (UniqueName: \"kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc\") pod \"auto-csr-approver-29521460-x7q4k\" (UID: \"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89\") " 
pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.340411 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j77kc\" (UniqueName: \"kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc\") pod \"auto-csr-approver-29521460-x7q4k\" (UID: \"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89\") " pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.361665 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j77kc\" (UniqueName: \"kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc\") pod \"auto-csr-approver-29521460-x7q4k\" (UID: \"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89\") " pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.489174 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: E0217 00:20:00.530731 5109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(a61801e18a85c257e5d39c8bc8345ab55486cb96c9dfaea8fb41ff8088960ea3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:20:00 crc kubenswrapper[5109]: E0217 00:20:00.530843 5109 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(a61801e18a85c257e5d39c8bc8345ab55486cb96c9dfaea8fb41ff8088960ea3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: E0217 00:20:00.530929 5109 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(a61801e18a85c257e5d39c8bc8345ab55486cb96c9dfaea8fb41ff8088960ea3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:00 crc kubenswrapper[5109]: E0217 00:20:00.531037 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29521460-x7q4k_openshift-infra(9e1ab308-fa27-446a-ab9d-98ab9b1ccb89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29521460-x7q4k_openshift-infra(9e1ab308-fa27-446a-ab9d-98ab9b1ccb89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(a61801e18a85c257e5d39c8bc8345ab55486cb96c9dfaea8fb41ff8088960ea3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.688340 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" event={"ID":"8c329d85-d26e-437f-82f4-4f8a12e54d44","Type":"ContainerStarted","Data":"12a4d99c1429ae6053e6a2dd1234107d5dcfb448add16cc6183b185416252182"} Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.737966 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" podStartSLOduration=7.73795009 podStartE2EDuration="7.73795009s" podCreationTimestamp="2026-02-17 00:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:20:00.736225656 +0000 UTC m=+672.067780494" watchObservedRunningTime="2026-02-17 00:20:00.73795009 +0000 UTC m=+672.069504848" Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.800377 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:20:00 crc kubenswrapper[5109]: I0217 00:20:00.800468 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.618841 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521460-x7q4k"] Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 
00:20:01.618940 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.619174 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:01 crc kubenswrapper[5109]: E0217 00:20:01.642493 5109 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(27c17a611eb6750abcb3d2f3d4588825ac71ce26acae46fbfb8ce694a68f8727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 00:20:01 crc kubenswrapper[5109]: E0217 00:20:01.642567 5109 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(27c17a611eb6750abcb3d2f3d4588825ac71ce26acae46fbfb8ce694a68f8727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:01 crc kubenswrapper[5109]: E0217 00:20:01.642611 5109 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(27c17a611eb6750abcb3d2f3d4588825ac71ce26acae46fbfb8ce694a68f8727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:01 crc kubenswrapper[5109]: E0217 00:20:01.642687 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29521460-x7q4k_openshift-infra(9e1ab308-fa27-446a-ab9d-98ab9b1ccb89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29521460-x7q4k_openshift-infra(9e1ab308-fa27-446a-ab9d-98ab9b1ccb89)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29521460-x7q4k_openshift-infra_9e1ab308-fa27-446a-ab9d-98ab9b1ccb89_0(27c17a611eb6750abcb3d2f3d4588825ac71ce26acae46fbfb8ce694a68f8727): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.694947 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.695030 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.695056 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.723421 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:01 crc kubenswrapper[5109]: I0217 00:20:01.731082 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:15 crc kubenswrapper[5109]: I0217 00:20:15.463736 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:15 crc kubenswrapper[5109]: I0217 00:20:15.465855 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:15 crc kubenswrapper[5109]: I0217 00:20:15.689884 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521460-x7q4k"] Feb 17 00:20:15 crc kubenswrapper[5109]: W0217 00:20:15.695150 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e1ab308_fa27_446a_ab9d_98ab9b1ccb89.slice/crio-66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9 WatchSource:0}: Error finding container 66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9: Status 404 returned error can't find the container with id 66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9 Feb 17 00:20:15 crc kubenswrapper[5109]: I0217 00:20:15.774646 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" event={"ID":"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89","Type":"ContainerStarted","Data":"66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9"} Feb 17 00:20:17 crc kubenswrapper[5109]: I0217 00:20:17.792125 5109 generic.go:358] "Generic (PLEG): container finished" podID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" containerID="60064ce0f5c654fd28d5ecef8237e8187cd9603b608997f24e4ea2d831747923" exitCode=0 Feb 17 00:20:17 crc kubenswrapper[5109]: I0217 00:20:17.792215 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" event={"ID":"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89","Type":"ContainerDied","Data":"60064ce0f5c654fd28d5ecef8237e8187cd9603b608997f24e4ea2d831747923"} Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.036397 5109 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.157200 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j77kc\" (UniqueName: \"kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc\") pod \"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89\" (UID: \"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89\") " Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.164884 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc" (OuterVolumeSpecName: "kube-api-access-j77kc") pod "9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" (UID: "9e1ab308-fa27-446a-ab9d-98ab9b1ccb89"). InnerVolumeSpecName "kube-api-access-j77kc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.258731 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j77kc\" (UniqueName: \"kubernetes.io/projected/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89-kube-api-access-j77kc\") on node \"crc\" DevicePath \"\"" Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.809868 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" event={"ID":"9e1ab308-fa27-446a-ab9d-98ab9b1ccb89","Type":"ContainerDied","Data":"66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9"} Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.810249 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66394af95e7c86e31853f8c5d6fd96f11608f533d0566a9846cc003679e104d9" Feb 17 00:20:19 crc kubenswrapper[5109]: I0217 00:20:19.809928 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521460-x7q4k" Feb 17 00:20:30 crc kubenswrapper[5109]: I0217 00:20:30.800530 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:20:30 crc kubenswrapper[5109]: I0217 00:20:30.801262 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:20:33 crc kubenswrapper[5109]: I0217 00:20:33.734704 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rv26r" Feb 17 00:20:51 crc kubenswrapper[5109]: I0217 00:20:51.051959 5109 scope.go:117] "RemoveContainer" containerID="ea1ad12a61b9c4366fe0473f416a58533b1cca24e29cccc74cc9cc79de87cc1d" Feb 17 00:20:51 crc kubenswrapper[5109]: I0217 00:20:51.066815 5109 scope.go:117] "RemoveContainer" containerID="14033009baab184f3748e60e4924fcc69138f6f45662537d53ff835ad32aa323" Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.492487 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.493749 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tlkct" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="registry-server" containerID="cri-o://c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c" gracePeriod=30 Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.816689 5109 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.947904 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities\") pod \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.947974 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km64g\" (UniqueName: \"kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g\") pod \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.948032 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content\") pod \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\" (UID: \"a6a5d27d-bc23-4df1-86a7-b44323215f2f\") " Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.950209 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities" (OuterVolumeSpecName: "utilities") pod "a6a5d27d-bc23-4df1-86a7-b44323215f2f" (UID: "a6a5d27d-bc23-4df1-86a7-b44323215f2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.955999 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g" (OuterVolumeSpecName: "kube-api-access-km64g") pod "a6a5d27d-bc23-4df1-86a7-b44323215f2f" (UID: "a6a5d27d-bc23-4df1-86a7-b44323215f2f"). 
InnerVolumeSpecName "kube-api-access-km64g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:20:57 crc kubenswrapper[5109]: I0217 00:20:57.966717 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6a5d27d-bc23-4df1-86a7-b44323215f2f" (UID: "a6a5d27d-bc23-4df1-86a7-b44323215f2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.049008 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.049046 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a5d27d-bc23-4df1-86a7-b44323215f2f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.049059 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-km64g\" (UniqueName: \"kubernetes.io/projected/a6a5d27d-bc23-4df1-86a7-b44323215f2f-kube-api-access-km64g\") on node \"crc\" DevicePath \"\"" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.077462 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerID="c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c" exitCode=0 Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.077542 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerDied","Data":"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c"} Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.077576 5109 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlkct" event={"ID":"a6a5d27d-bc23-4df1-86a7-b44323215f2f","Type":"ContainerDied","Data":"24adba95caf48f56334dbd12649cd4223f474a7440075ecc07c0d74586514c54"} Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.077649 5109 scope.go:117] "RemoveContainer" containerID="c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.077697 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlkct" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.106723 5109 scope.go:117] "RemoveContainer" containerID="62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.123076 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.126987 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlkct"] Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.137856 5109 scope.go:117] "RemoveContainer" containerID="add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.163954 5109 scope.go:117] "RemoveContainer" containerID="c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c" Feb 17 00:20:58 crc kubenswrapper[5109]: E0217 00:20:58.164485 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c\": container with ID starting with c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c not found: ID does not exist" containerID="c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c" Feb 17 00:20:58 crc 
kubenswrapper[5109]: I0217 00:20:58.164533 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c"} err="failed to get container status \"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c\": rpc error: code = NotFound desc = could not find container \"c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c\": container with ID starting with c6795a232a7baa19a7e8fb13f508ee8bea6438b001d4c4747347b67a1edb0c8c not found: ID does not exist" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.164567 5109 scope.go:117] "RemoveContainer" containerID="62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807" Feb 17 00:20:58 crc kubenswrapper[5109]: E0217 00:20:58.165120 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807\": container with ID starting with 62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807 not found: ID does not exist" containerID="62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.165191 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807"} err="failed to get container status \"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807\": rpc error: code = NotFound desc = could not find container \"62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807\": container with ID starting with 62e3526f93f183b2188735c3a9834ee5df6ea156deedb61adb49dca78c49b807 not found: ID does not exist" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.165230 5109 scope.go:117] "RemoveContainer" containerID="add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2" Feb 17 
00:20:58 crc kubenswrapper[5109]: E0217 00:20:58.165649 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2\": container with ID starting with add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2 not found: ID does not exist" containerID="add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2" Feb 17 00:20:58 crc kubenswrapper[5109]: I0217 00:20:58.165687 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2"} err="failed to get container status \"add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2\": rpc error: code = NotFound desc = could not find container \"add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2\": container with ID starting with add5ee43b9517c8d270eb8fc46c948d5acfb206a58200b5548580df09ab00eb2 not found: ID does not exist" Feb 17 00:20:59 crc kubenswrapper[5109]: I0217 00:20:59.477118 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" path="/var/lib/kubelet/pods/a6a5d27d-bc23-4df1-86a7-b44323215f2f/volumes" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.774076 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb"] Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775081 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="registry-server" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775106 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="registry-server" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775136 5109 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="extract-content" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775149 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="extract-content" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775171 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" containerName="oc" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775185 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" containerName="oc" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775209 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="extract-utilities" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775222 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="extract-utilities" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775387 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" containerName="oc" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.775411 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a6a5d27d-bc23-4df1-86a7-b44323215f2f" containerName="registry-server" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.793936 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb"] Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.794151 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.796258 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.799869 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.799930 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.799978 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.800507 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.800580 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" 
containerID="cri-o://4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f" gracePeriod=600 Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.883018 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.883075 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.883171 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ps5\" (UniqueName: \"kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.986671 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.986800 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.986986 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-84ps5\" (UniqueName: \"kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.987231 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:00 crc kubenswrapper[5109]: I0217 00:21:00.987641 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.022673 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ps5\" (UniqueName: \"kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.098055 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f" exitCode=0 Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.098245 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f"} Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.098295 5109 scope.go:117] "RemoveContainer" containerID="ca47888da47c26cab83fcf442e46e7728fd6fee6d192dec983630c8c66aeda36" Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.113071 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:01 crc kubenswrapper[5109]: I0217 00:21:01.377038 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb"] Feb 17 00:21:01 crc kubenswrapper[5109]: W0217 00:21:01.391990 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod564af301_6673_4e7b_8882_b923a9df0634.slice/crio-09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710 WatchSource:0}: Error finding container 09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710: Status 404 returned error can't find the container with id 09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710 Feb 17 00:21:02 crc kubenswrapper[5109]: I0217 00:21:02.108351 5109 generic.go:358] "Generic (PLEG): container finished" podID="564af301-6673-4e7b-8882-b923a9df0634" containerID="cdbc9910e548ca0035002595754857ae091704800ec3e596342cbb4d34c06dc2" exitCode=0 Feb 17 00:21:02 crc kubenswrapper[5109]: I0217 00:21:02.108444 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" event={"ID":"564af301-6673-4e7b-8882-b923a9df0634","Type":"ContainerDied","Data":"cdbc9910e548ca0035002595754857ae091704800ec3e596342cbb4d34c06dc2"} Feb 17 00:21:02 crc kubenswrapper[5109]: I0217 00:21:02.108993 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" event={"ID":"564af301-6673-4e7b-8882-b923a9df0634","Type":"ContainerStarted","Data":"09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710"} Feb 17 00:21:02 crc kubenswrapper[5109]: I0217 00:21:02.111268 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35"} Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.719216 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"] Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.729134 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.733322 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"] Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.826784 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.826980 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.827026 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg2sf\" (UniqueName: \"kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc 
kubenswrapper[5109]: I0217 00:21:03.928800 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.928855 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wg2sf\" (UniqueName: \"kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.928948 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.929612 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.929683 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:03 crc kubenswrapper[5109]: I0217 00:21:03.970722 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg2sf\" (UniqueName: \"kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf\") pod \"redhat-operators-flxwz\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:04 crc kubenswrapper[5109]: I0217 00:21:04.092112 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:04 crc kubenswrapper[5109]: I0217 00:21:04.123160 5109 generic.go:358] "Generic (PLEG): container finished" podID="564af301-6673-4e7b-8882-b923a9df0634" containerID="449d17b48374480c29eb89229166e47c1271e9ca23a0582fcf3f369973997773" exitCode=0 Feb 17 00:21:04 crc kubenswrapper[5109]: I0217 00:21:04.123191 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" event={"ID":"564af301-6673-4e7b-8882-b923a9df0634","Type":"ContainerDied","Data":"449d17b48374480c29eb89229166e47c1271e9ca23a0582fcf3f369973997773"} Feb 17 00:21:04 crc kubenswrapper[5109]: I0217 00:21:04.508652 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"] Feb 17 00:21:04 crc kubenswrapper[5109]: W0217 00:21:04.518437 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc55a14c8_0f4e_4283_bc15_94334c567101.slice/crio-c3b7a2501e2d95bf3df72d5fe0eeaca9664205891419a80fe6338bd54c6742a2 WatchSource:0}: Error finding container c3b7a2501e2d95bf3df72d5fe0eeaca9664205891419a80fe6338bd54c6742a2: Status 404 returned error can't find the container with id c3b7a2501e2d95bf3df72d5fe0eeaca9664205891419a80fe6338bd54c6742a2 Feb 17 00:21:05 crc kubenswrapper[5109]: I0217 00:21:05.133858 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="564af301-6673-4e7b-8882-b923a9df0634" containerID="2a919a38bec8c5ba74ca19079576b14edd2a675a2299ca9d1862dbd37d958400" exitCode=0 Feb 17 00:21:05 crc kubenswrapper[5109]: I0217 00:21:05.133922 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" event={"ID":"564af301-6673-4e7b-8882-b923a9df0634","Type":"ContainerDied","Data":"2a919a38bec8c5ba74ca19079576b14edd2a675a2299ca9d1862dbd37d958400"} Feb 17 00:21:05 crc kubenswrapper[5109]: I0217 00:21:05.136152 5109 generic.go:358] "Generic (PLEG): container finished" podID="c55a14c8-0f4e-4283-bc15-94334c567101" containerID="8cac2e93957a1683eeec43e66d125fbd5c2584dc7a5c1e86131fda887fd657cd" exitCode=0 Feb 17 00:21:05 crc kubenswrapper[5109]: I0217 00:21:05.136239 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerDied","Data":"8cac2e93957a1683eeec43e66d125fbd5c2584dc7a5c1e86131fda887fd657cd"} Feb 17 00:21:05 crc kubenswrapper[5109]: I0217 00:21:05.136277 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerStarted","Data":"c3b7a2501e2d95bf3df72d5fe0eeaca9664205891419a80fe6338bd54c6742a2"} Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.434029 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.563036 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle\") pod \"564af301-6673-4e7b-8882-b923a9df0634\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.563193 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84ps5\" (UniqueName: \"kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5\") pod \"564af301-6673-4e7b-8882-b923a9df0634\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.563263 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util\") pod \"564af301-6673-4e7b-8882-b923a9df0634\" (UID: \"564af301-6673-4e7b-8882-b923a9df0634\") " Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.566200 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle" (OuterVolumeSpecName: "bundle") pod "564af301-6673-4e7b-8882-b923a9df0634" (UID: "564af301-6673-4e7b-8882-b923a9df0634"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.572827 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5" (OuterVolumeSpecName: "kube-api-access-84ps5") pod "564af301-6673-4e7b-8882-b923a9df0634" (UID: "564af301-6673-4e7b-8882-b923a9df0634"). InnerVolumeSpecName "kube-api-access-84ps5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.579810 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util" (OuterVolumeSpecName: "util") pod "564af301-6673-4e7b-8882-b923a9df0634" (UID: "564af301-6673-4e7b-8882-b923a9df0634"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.665939 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.666006 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-84ps5\" (UniqueName: \"kubernetes.io/projected/564af301-6673-4e7b-8882-b923a9df0634-kube-api-access-84ps5\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:06 crc kubenswrapper[5109]: I0217 00:21:06.666032 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/564af301-6673-4e7b-8882-b923a9df0634-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:07 crc kubenswrapper[5109]: I0217 00:21:07.153585 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" event={"ID":"564af301-6673-4e7b-8882-b923a9df0634","Type":"ContainerDied","Data":"09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710"} Feb 17 00:21:07 crc kubenswrapper[5109]: I0217 00:21:07.153641 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f8376f3c2d64e36539b34d46a9b7ac6228721477890624f4847fd25a597710" Feb 17 00:21:07 crc kubenswrapper[5109]: I0217 00:21:07.153585 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb" Feb 17 00:21:07 crc kubenswrapper[5109]: I0217 00:21:07.156342 5109 generic.go:358] "Generic (PLEG): container finished" podID="c55a14c8-0f4e-4283-bc15-94334c567101" containerID="09e7c4ed7e5c1df18c06a0807859b1f49762884c0cd22e33f987ca80b0b67dec" exitCode=0 Feb 17 00:21:07 crc kubenswrapper[5109]: I0217 00:21:07.156494 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerDied","Data":"09e7c4ed7e5c1df18c06a0807859b1f49762884c0cd22e33f987ca80b0b67dec"} Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.166926 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerStarted","Data":"8fba753cf13adeb51517d1b89dab98fb1ad437f894a124a9dfa635a9087a70ab"} Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.193163 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-flxwz" podStartSLOduration=4.132094088 podStartE2EDuration="5.193134131s" podCreationTimestamp="2026-02-17 00:21:03 +0000 UTC" firstStartedPulling="2026-02-17 00:21:05.137446275 +0000 UTC m=+736.469001063" lastFinishedPulling="2026-02-17 00:21:06.198486338 +0000 UTC m=+737.530041106" observedRunningTime="2026-02-17 00:21:08.191013046 +0000 UTC m=+739.522567844" watchObservedRunningTime="2026-02-17 00:21:08.193134131 +0000 UTC m=+739.524688929" Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.368223 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"] Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369072 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="extract"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369100 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="extract"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369144 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="util"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369155 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="util"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369185 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="pull"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369196 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="pull"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.369342 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="564af301-6673-4e7b-8882-b923a9df0634" containerName="extract"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.377146 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"]
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.377311 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.379720 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.491344 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.491626 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.491826 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmtw\" (UniqueName: \"kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.593287 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmtw\" (UniqueName: \"kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.593363 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.593410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.593863 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.594341 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.620271 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmtw\" (UniqueName: \"kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:08 crc kubenswrapper[5109]: I0217 00:21:08.690943 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:09 crc kubenswrapper[5109]: I0217 00:21:09.117555 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"]
Feb 17 00:21:09 crc kubenswrapper[5109]: I0217 00:21:09.174204 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277" event={"ID":"1530a0bf-d290-47e8-9d2e-d85a94ff1983","Type":"ContainerStarted","Data":"6b8ad343b116ed6a1a79d81b4cef5c4df85dfa025b6713fb8865d3acb9131e3a"}
Feb 17 00:21:10 crc kubenswrapper[5109]: I0217 00:21:10.183388 5109 generic.go:358] "Generic (PLEG): container finished" podID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerID="46c1ea2357655a8c8437f607d201ed1f9b41abbaa2f7ec5103473500b69e51de" exitCode=0
Feb 17 00:21:10 crc kubenswrapper[5109]: I0217 00:21:10.183541 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277" event={"ID":"1530a0bf-d290-47e8-9d2e-d85a94ff1983","Type":"ContainerDied","Data":"46c1ea2357655a8c8437f607d201ed1f9b41abbaa2f7ec5103473500b69e51de"}
Feb 17 00:21:12 crc kubenswrapper[5109]: I0217 00:21:12.197775 5109 generic.go:358] "Generic (PLEG): container finished" podID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerID="daac6cd3eaa9ad1b5308f1156c82517271bc21e7ed70ccd327bc8d9af22dbca4" exitCode=0
Feb 17 00:21:12 crc kubenswrapper[5109]: I0217 00:21:12.197909 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277" event={"ID":"1530a0bf-d290-47e8-9d2e-d85a94ff1983","Type":"ContainerDied","Data":"daac6cd3eaa9ad1b5308f1156c82517271bc21e7ed70ccd327bc8d9af22dbca4"}
Feb 17 00:21:12 crc kubenswrapper[5109]: I0217 00:21:12.959958 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"]
Feb 17 00:21:12 crc kubenswrapper[5109]: I0217 00:21:12.966555 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:12 crc kubenswrapper[5109]: I0217 00:21:12.997013 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"]
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.047797 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqpqv\" (UniqueName: \"kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.047894 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.048101 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.148809 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqpqv\" (UniqueName: \"kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.148860 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.148917 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.149869 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.149936 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.187558 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqpqv\" (UniqueName: \"kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv\") pod \"certified-operators-4bdrw\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") " pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.205293 5109 generic.go:358] "Generic (PLEG): container finished" podID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerID="516015efc6830075d74398b836cdaab749e74fd878930d8516ada0f4c7514960" exitCode=0
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.205349 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277" event={"ID":"1530a0bf-d290-47e8-9d2e-d85a94ff1983","Type":"ContainerDied","Data":"516015efc6830075d74398b836cdaab749e74fd878930d8516ada0f4c7514960"}
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.280235 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:13 crc kubenswrapper[5109]: I0217 00:21:13.572697 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"]
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.092761 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-flxwz"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.093101 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-flxwz"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.211361 5109 generic.go:358] "Generic (PLEG): container finished" podID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerID="2f05b706eb8510635c0ffe1b1ee99c2820a876e7aaf6db1cb9f77a8b7e0ea015" exitCode=0
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.211486 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerDied","Data":"2f05b706eb8510635c0ffe1b1ee99c2820a876e7aaf6db1cb9f77a8b7e0ea015"}
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.211544 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerStarted","Data":"dd32c8859fa58616a4713437f9456749f072609c9e7fd3adeebc04eb4773797d"}
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.546027 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.563319 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmtw\" (UniqueName: \"kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw\") pod \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") "
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.563370 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle\") pod \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") "
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.563397 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util\") pod \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\" (UID: \"1530a0bf-d290-47e8-9d2e-d85a94ff1983\") "
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.564235 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle" (OuterVolumeSpecName: "bundle") pod "1530a0bf-d290-47e8-9d2e-d85a94ff1983" (UID: "1530a0bf-d290-47e8-9d2e-d85a94ff1983"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.569747 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw" (OuterVolumeSpecName: "kube-api-access-mnmtw") pod "1530a0bf-d290-47e8-9d2e-d85a94ff1983" (UID: "1530a0bf-d290-47e8-9d2e-d85a94ff1983"). InnerVolumeSpecName "kube-api-access-mnmtw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.599770 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"]
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600523 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="util"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600622 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="util"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600730 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="pull"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600799 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="pull"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600871 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="extract"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.600925 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="extract"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.601058 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="1530a0bf-d290-47e8-9d2e-d85a94ff1983" containerName="extract"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.664243 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mnmtw\" (UniqueName: \"kubernetes.io/projected/1530a0bf-d290-47e8-9d2e-d85a94ff1983-kube-api-access-mnmtw\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.664279 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.706919 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"]
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.707100 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.765053 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.765138 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.765172 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxsp\" (UniqueName: \"kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.844866 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util" (OuterVolumeSpecName: "util") pod "1530a0bf-d290-47e8-9d2e-d85a94ff1983" (UID: "1530a0bf-d290-47e8-9d2e-d85a94ff1983"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.865979 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.866302 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.866435 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrxsp\" (UniqueName: \"kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.866581 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1530a0bf-d290-47e8-9d2e-d85a94ff1983-util\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.866814 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.866820 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:14 crc kubenswrapper[5109]: I0217 00:21:14.884338 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrxsp\" (UniqueName: \"kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.022742 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.139326 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-flxwz" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="registry-server" probeResult="failure" output=<
Feb 17 00:21:15 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s
Feb 17 00:21:15 crc kubenswrapper[5109]: >
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.224908 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277" event={"ID":"1530a0bf-d290-47e8-9d2e-d85a94ff1983","Type":"ContainerDied","Data":"6b8ad343b116ed6a1a79d81b4cef5c4df85dfa025b6713fb8865d3acb9131e3a"}
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.224943 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b8ad343b116ed6a1a79d81b4cef5c4df85dfa025b6713fb8865d3acb9131e3a"
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.225024 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277"
Feb 17 00:21:15 crc kubenswrapper[5109]: I0217 00:21:15.345199 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x"]
Feb 17 00:21:15 crc kubenswrapper[5109]: W0217 00:21:15.359288 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049a8588_d5af_44ed_a85c_77a01850a79d.slice/crio-79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce WatchSource:0}: Error finding container 79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce: Status 404 returned error can't find the container with id 79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce
Feb 17 00:21:16 crc kubenswrapper[5109]: I0217 00:21:16.231743 5109 generic.go:358] "Generic (PLEG): container finished" podID="049a8588-d5af-44ed-a85c-77a01850a79d" containerID="a5d119cf80f54e967a5cbe2f5376782b72e164f77560a0e69d11d3a5606e0f51" exitCode=0
Feb 17 00:21:16 crc kubenswrapper[5109]: I0217 00:21:16.231861 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" event={"ID":"049a8588-d5af-44ed-a85c-77a01850a79d","Type":"ContainerDied","Data":"a5d119cf80f54e967a5cbe2f5376782b72e164f77560a0e69d11d3a5606e0f51"}
Feb 17 00:21:16 crc kubenswrapper[5109]: I0217 00:21:16.231927 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" event={"ID":"049a8588-d5af-44ed-a85c-77a01850a79d","Type":"ContainerStarted","Data":"79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce"}
Feb 17 00:21:16 crc kubenswrapper[5109]: I0217 00:21:16.234368 5109 generic.go:358] "Generic (PLEG): container finished" podID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerID="0bad16c05b412ba5b661d049a2cb24bbac6b8a05277210c17f04ccb8bf4f0b76" exitCode=0
Feb 17 00:21:16 crc kubenswrapper[5109]: I0217 00:21:16.234464 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerDied","Data":"0bad16c05b412ba5b661d049a2cb24bbac6b8a05277210c17f04ccb8bf4f0b76"}
Feb 17 00:21:17 crc kubenswrapper[5109]: I0217 00:21:17.242970 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerStarted","Data":"d9d4a39ca97afcf5af32528c7faac2dcabfd24c1d0b7356845fea9822aee2efb"}
Feb 17 00:21:17 crc kubenswrapper[5109]: I0217 00:21:17.261801 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bdrw" podStartSLOduration=4.272838543 podStartE2EDuration="5.261785346s" podCreationTimestamp="2026-02-17 00:21:12 +0000 UTC" firstStartedPulling="2026-02-17 00:21:14.212109783 +0000 UTC m=+745.543664541" lastFinishedPulling="2026-02-17 00:21:15.201056586 +0000 UTC m=+746.532611344" observedRunningTime="2026-02-17 00:21:17.258668717 +0000 UTC m=+748.590223475" watchObservedRunningTime="2026-02-17 00:21:17.261785346 +0000 UTC m=+748.593340104"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.597956 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.628399 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.628560 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.632729 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-vxck6\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.633083 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.633405 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.640803 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.681184 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.681227 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.681358 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.686517 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-gt6r7\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.686732 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.697670 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.697828 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.714345 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.714412 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.714457 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.714489 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt52s\" (UniqueName: \"kubernetes.io/projected/034835f5-e853-4070-9944-045206b1b990-kube-api-access-rt52s\") pod \"obo-prometheus-operator-9bc85b4bf-vvp9n\" (UID: \"034835f5-e853-4070-9944-045206b1b990\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.714515 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.815433 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.815502 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.815544 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rt52s\" (UniqueName: \"kubernetes.io/projected/034835f5-e853-4070-9944-045206b1b990-kube-api-access-rt52s\") pod \"obo-prometheus-operator-9bc85b4bf-vvp9n\" (UID: \"034835f5-e853-4070-9944-045206b1b990\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.815578 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.815984 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.827520 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gtfv5"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.827698 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.828026 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6877d358-6ca8-41b1-8eac-69453394b64b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l\" (UID: \"6877d358-6ca8-41b1-8eac-69453394b64b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.832195 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.834002 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/642eed45-4b50-4af0-9656-27559798d21c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t\" (UID: \"642eed45-4b50-4af0-9656-27559798d21c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.842535 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt52s\" (UniqueName: \"kubernetes.io/projected/034835f5-e853-4070-9944-045206b1b990-kube-api-access-rt52s\") pod \"obo-prometheus-operator-9bc85b4bf-vvp9n\" (UID: \"034835f5-e853-4070-9944-045206b1b990\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.859517 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gtfv5"]
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.859953 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-gtfv5"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.864622 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.864878 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-vhrht\""
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.918417 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-observability-operator-tls\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.918480 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lll2\" (UniqueName: \"kubernetes.io/projected/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-kube-api-access-2lll2\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5"
Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.948961 5109 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n" Feb 17 00:21:18 crc kubenswrapper[5109]: I0217 00:21:18.998434 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.019248 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-observability-operator-tls\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.019287 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lll2\" (UniqueName: \"kubernetes.io/projected/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-kube-api-access-2lll2\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.024358 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.030274 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-observability-operator-tls\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.040516 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lll2\" (UniqueName: \"kubernetes.io/projected/f517e4ec-97f5-4f18-8ea5-f1f3b37dd332-kube-api-access-2lll2\") pod \"observability-operator-85c68dddb-gtfv5\" (UID: \"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332\") " pod="openshift-operators/observability-operator-85c68dddb-gtfv5" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.087174 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bgfhc"] Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.112239 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bgfhc"] Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.112407 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.121002 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-4fnzf\"" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.205862 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-gtfv5" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.221966 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7cg\" (UniqueName: \"kubernetes.io/projected/52985de1-df17-4d46-8b16-07bedfc870c0-kube-api-access-xw7cg\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.222026 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/52985de1-df17-4d46-8b16-07bedfc870c0-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.323231 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7cg\" (UniqueName: \"kubernetes.io/projected/52985de1-df17-4d46-8b16-07bedfc870c0-kube-api-access-xw7cg\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.323293 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/52985de1-df17-4d46-8b16-07bedfc870c0-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.324080 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/52985de1-df17-4d46-8b16-07bedfc870c0-openshift-service-ca\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.344169 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7cg\" (UniqueName: \"kubernetes.io/projected/52985de1-df17-4d46-8b16-07bedfc870c0-kube-api-access-xw7cg\") pod \"perses-operator-669c9f96b5-bgfhc\" (UID: \"52985de1-df17-4d46-8b16-07bedfc870c0\") " pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:19 crc kubenswrapper[5109]: I0217 00:21:19.436049 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:21 crc kubenswrapper[5109]: I0217 00:21:21.967655 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l"] Feb 17 00:21:21 crc kubenswrapper[5109]: W0217 00:21:21.983340 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6877d358_6ca8_41b1_8eac_69453394b64b.slice/crio-efe17285f21d898320fabcfb6de375d204556f25b92ec1bc64bf69f9c3e80a5b WatchSource:0}: Error finding container efe17285f21d898320fabcfb6de375d204556f25b92ec1bc64bf69f9c3e80a5b: Status 404 returned error can't find the container with id efe17285f21d898320fabcfb6de375d204556f25b92ec1bc64bf69f9c3e80a5b Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.076021 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-gtfv5"] Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.118638 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t"] Feb 17 00:21:22 crc 
kubenswrapper[5109]: I0217 00:21:22.270301 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-gtfv5" event={"ID":"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332","Type":"ContainerStarted","Data":"2e3876ba4711084c33df76b1e06f7c855a2e944fc5f0dac3be65c1dbfc151a14"} Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.272051 5109 generic.go:358] "Generic (PLEG): container finished" podID="049a8588-d5af-44ed-a85c-77a01850a79d" containerID="12b8a5fd57189335e76b6f4e1160ac18d0fe961dabd100919df0ddb809115914" exitCode=0 Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.272135 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" event={"ID":"049a8588-d5af-44ed-a85c-77a01850a79d","Type":"ContainerDied","Data":"12b8a5fd57189335e76b6f4e1160ac18d0fe961dabd100919df0ddb809115914"} Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.273627 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t" event={"ID":"642eed45-4b50-4af0-9656-27559798d21c","Type":"ContainerStarted","Data":"a781d787b116fda83555ec71cf5cdfd07c824a581d16b5824f8e1d2c1d3e13a0"} Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.274688 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l" event={"ID":"6877d358-6ca8-41b1-8eac-69453394b64b","Type":"ContainerStarted","Data":"efe17285f21d898320fabcfb6de375d204556f25b92ec1bc64bf69f9c3e80a5b"} Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.362311 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-567b56b95-744lx"] Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.371740 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.376233 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.376643 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-rmknp\"" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.377278 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.395789 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 17 00:21:22 crc kubenswrapper[5109]: W0217 00:21:22.400272 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod034835f5_e853_4070_9944_045206b1b990.slice/crio-c21460c90d3b6cb34e2c684e5c0e737acfa1372a4b36cce2054c25f194da8983 WatchSource:0}: Error finding container c21460c90d3b6cb34e2c684e5c0e737acfa1372a4b36cce2054c25f194da8983: Status 404 returned error can't find the container with id c21460c90d3b6cb34e2c684e5c0e737acfa1372a4b36cce2054c25f194da8983 Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.400330 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n"] Feb 17 00:21:22 crc kubenswrapper[5109]: W0217 00:21:22.400661 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52985de1_df17_4d46_8b16_07bedfc870c0.slice/crio-f75cc3ae169d362fddcbd932e567a951c1942abbc31a986e606731c358a909ee WatchSource:0}: Error finding container 
f75cc3ae169d362fddcbd932e567a951c1942abbc31a986e606731c358a909ee: Status 404 returned error can't find the container with id f75cc3ae169d362fddcbd932e567a951c1942abbc31a986e606731c358a909ee Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.405778 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-567b56b95-744lx"] Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.418315 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-bgfhc"] Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.490036 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-apiservice-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.490089 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-webhook-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.490290 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h299z\" (UniqueName: \"kubernetes.io/projected/c14a4db7-e671-4c26-a917-082ebc150072-kube-api-access-h299z\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.591555 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-apiservice-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.591627 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-webhook-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.591670 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h299z\" (UniqueName: \"kubernetes.io/projected/c14a4db7-e671-4c26-a917-082ebc150072-kube-api-access-h299z\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.599323 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-apiservice-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.599338 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c14a4db7-e671-4c26-a917-082ebc150072-webhook-cert\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.613566 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h299z\" 
(UniqueName: \"kubernetes.io/projected/c14a4db7-e671-4c26-a917-082ebc150072-kube-api-access-h299z\") pod \"elastic-operator-567b56b95-744lx\" (UID: \"c14a4db7-e671-4c26-a917-082ebc150072\") " pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.701769 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-567b56b95-744lx" Feb 17 00:21:22 crc kubenswrapper[5109]: I0217 00:21:22.983304 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-567b56b95-744lx"] Feb 17 00:21:22 crc kubenswrapper[5109]: W0217 00:21:22.998855 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14a4db7_e671_4c26_a917_082ebc150072.slice/crio-d734fcaf94298c4da10764b77eff2761c24bfc863c0aa35eed69fb4e898a4db1 WatchSource:0}: Error finding container d734fcaf94298c4da10764b77eff2761c24bfc863c0aa35eed69fb4e898a4db1: Status 404 returned error can't find the container with id d734fcaf94298c4da10764b77eff2761c24bfc863c0aa35eed69fb4e898a4db1 Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.284966 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bdrw" Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.285287 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4bdrw" Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.298490 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-567b56b95-744lx" event={"ID":"c14a4db7-e671-4c26-a917-082ebc150072","Type":"ContainerStarted","Data":"d734fcaf94298c4da10764b77eff2761c24bfc863c0aa35eed69fb4e898a4db1"} Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.304952 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="049a8588-d5af-44ed-a85c-77a01850a79d" containerID="598e08c1de6d40678f7d59d2aa5733c1e0a9ed86985ef38d06102d35a60be82c" exitCode=0 Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.305094 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" event={"ID":"049a8588-d5af-44ed-a85c-77a01850a79d","Type":"ContainerDied","Data":"598e08c1de6d40678f7d59d2aa5733c1e0a9ed86985ef38d06102d35a60be82c"} Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.308199 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n" event={"ID":"034835f5-e853-4070-9944-045206b1b990","Type":"ContainerStarted","Data":"c21460c90d3b6cb34e2c684e5c0e737acfa1372a4b36cce2054c25f194da8983"} Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.314042 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" event={"ID":"52985de1-df17-4d46-8b16-07bedfc870c0","Type":"ContainerStarted","Data":"f75cc3ae169d362fddcbd932e567a951c1942abbc31a986e606731c358a909ee"} Feb 17 00:21:23 crc kubenswrapper[5109]: I0217 00:21:23.359668 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bdrw" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.173756 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.216217 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.382820 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bdrw" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.611411 5109 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.640406 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle\") pod \"049a8588-d5af-44ed-a85c-77a01850a79d\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.640531 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrxsp\" (UniqueName: \"kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp\") pod \"049a8588-d5af-44ed-a85c-77a01850a79d\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.640576 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util\") pod \"049a8588-d5af-44ed-a85c-77a01850a79d\" (UID: \"049a8588-d5af-44ed-a85c-77a01850a79d\") " Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.642283 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle" (OuterVolumeSpecName: "bundle") pod "049a8588-d5af-44ed-a85c-77a01850a79d" (UID: "049a8588-d5af-44ed-a85c-77a01850a79d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.646730 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp" (OuterVolumeSpecName: "kube-api-access-hrxsp") pod "049a8588-d5af-44ed-a85c-77a01850a79d" (UID: "049a8588-d5af-44ed-a85c-77a01850a79d"). 
InnerVolumeSpecName "kube-api-access-hrxsp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.662411 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util" (OuterVolumeSpecName: "util") pod "049a8588-d5af-44ed-a85c-77a01850a79d" (UID: "049a8588-d5af-44ed-a85c-77a01850a79d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.742800 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrxsp\" (UniqueName: \"kubernetes.io/projected/049a8588-d5af-44ed-a85c-77a01850a79d-kube-api-access-hrxsp\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.743034 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:24 crc kubenswrapper[5109]: I0217 00:21:24.743046 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/049a8588-d5af-44ed-a85c-77a01850a79d-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:21:25 crc kubenswrapper[5109]: I0217 00:21:25.302091 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"] Feb 17 00:21:25 crc kubenswrapper[5109]: I0217 00:21:25.342194 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" Feb 17 00:21:25 crc kubenswrapper[5109]: I0217 00:21:25.342194 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x" event={"ID":"049a8588-d5af-44ed-a85c-77a01850a79d","Type":"ContainerDied","Data":"79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce"} Feb 17 00:21:25 crc kubenswrapper[5109]: I0217 00:21:25.342351 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cd2b642b623ab8f4166488a9381db57939e32c8e828bbf2c669dd434a622ce" Feb 17 00:21:25 crc kubenswrapper[5109]: I0217 00:21:25.342778 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-flxwz" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="registry-server" containerID="cri-o://8fba753cf13adeb51517d1b89dab98fb1ad437f894a124a9dfa635a9087a70ab" gracePeriod=2 Feb 17 00:21:26 crc kubenswrapper[5109]: I0217 00:21:26.355428 5109 generic.go:358] "Generic (PLEG): container finished" podID="c55a14c8-0f4e-4283-bc15-94334c567101" containerID="8fba753cf13adeb51517d1b89dab98fb1ad437f894a124a9dfa635a9087a70ab" exitCode=0 Feb 17 00:21:26 crc kubenswrapper[5109]: I0217 00:21:26.355687 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerDied","Data":"8fba753cf13adeb51517d1b89dab98fb1ad437f894a124a9dfa635a9087a70ab"} Feb 17 00:21:27 crc kubenswrapper[5109]: I0217 00:21:27.504383 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"] Feb 17 00:21:27 crc kubenswrapper[5109]: I0217 00:21:27.504702 5109 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-4bdrw" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="registry-server" containerID="cri-o://d9d4a39ca97afcf5af32528c7faac2dcabfd24c1d0b7356845fea9822aee2efb" gracePeriod=2 Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.177746 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flxwz" Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.197740 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg2sf\" (UniqueName: \"kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf\") pod \"c55a14c8-0f4e-4283-bc15-94334c567101\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.198148 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities\") pod \"c55a14c8-0f4e-4283-bc15-94334c567101\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.198238 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content\") pod \"c55a14c8-0f4e-4283-bc15-94334c567101\" (UID: \"c55a14c8-0f4e-4283-bc15-94334c567101\") " Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.199861 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities" (OuterVolumeSpecName: "utilities") pod "c55a14c8-0f4e-4283-bc15-94334c567101" (UID: "c55a14c8-0f4e-4283-bc15-94334c567101"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.226659 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf" (OuterVolumeSpecName: "kube-api-access-wg2sf") pod "c55a14c8-0f4e-4283-bc15-94334c567101" (UID: "c55a14c8-0f4e-4283-bc15-94334c567101"). InnerVolumeSpecName "kube-api-access-wg2sf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.299975 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.300019 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wg2sf\" (UniqueName: \"kubernetes.io/projected/c55a14c8-0f4e-4283-bc15-94334c567101-kube-api-access-wg2sf\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.319784 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c55a14c8-0f4e-4283-bc15-94334c567101" (UID: "c55a14c8-0f4e-4283-bc15-94334c567101"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.369382 5109 generic.go:358] "Generic (PLEG): container finished" podID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerID="d9d4a39ca97afcf5af32528c7faac2dcabfd24c1d0b7356845fea9822aee2efb" exitCode=0
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.369649 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerDied","Data":"d9d4a39ca97afcf5af32528c7faac2dcabfd24c1d0b7356845fea9822aee2efb"}
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.371831 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-flxwz" event={"ID":"c55a14c8-0f4e-4283-bc15-94334c567101","Type":"ContainerDied","Data":"c3b7a2501e2d95bf3df72d5fe0eeaca9664205891419a80fe6338bd54c6742a2"}
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.371882 5109 scope.go:117] "RemoveContainer" containerID="8fba753cf13adeb51517d1b89dab98fb1ad437f894a124a9dfa635a9087a70ab"
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.371854 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-flxwz"
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.401118 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55a14c8-0f4e-4283-bc15-94334c567101-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.404829 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"]
Feb 17 00:21:28 crc kubenswrapper[5109]: I0217 00:21:28.412644 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-flxwz"]
Feb 17 00:21:29 crc kubenswrapper[5109]: I0217 00:21:29.472235 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" path="/var/lib/kubelet/pods/c55a14c8-0f4e-4283-bc15-94334c567101/volumes"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.344886 5109 scope.go:117] "RemoveContainer" containerID="09e7c4ed7e5c1df18c06a0807859b1f49762884c0cd22e33f987ca80b0b67dec"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.372111 5109 scope.go:117] "RemoveContainer" containerID="8cac2e93957a1683eeec43e66d125fbd5c2584dc7a5c1e86131fda887fd657cd"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.382015 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.414789 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bdrw" event={"ID":"fcceb32f-15f2-442f-a4ff-07f367afd148","Type":"ContainerDied","Data":"dd32c8859fa58616a4713437f9456749f072609c9e7fd3adeebc04eb4773797d"}
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.414835 5109 scope.go:117] "RemoveContainer" containerID="d9d4a39ca97afcf5af32528c7faac2dcabfd24c1d0b7356845fea9822aee2efb"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.414934 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bdrw"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.473314 5109 scope.go:117] "RemoveContainer" containerID="0bad16c05b412ba5b661d049a2cb24bbac6b8a05277210c17f04ccb8bf4f0b76"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.477416 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content\") pod \"fcceb32f-15f2-442f-a4ff-07f367afd148\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") "
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.477496 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqpqv\" (UniqueName: \"kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv\") pod \"fcceb32f-15f2-442f-a4ff-07f367afd148\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") "
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.477564 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities\") pod \"fcceb32f-15f2-442f-a4ff-07f367afd148\" (UID: \"fcceb32f-15f2-442f-a4ff-07f367afd148\") "
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.478646 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities" (OuterVolumeSpecName: "utilities") pod "fcceb32f-15f2-442f-a4ff-07f367afd148" (UID: "fcceb32f-15f2-442f-a4ff-07f367afd148"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.488818 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv" (OuterVolumeSpecName: "kube-api-access-dqpqv") pod "fcceb32f-15f2-442f-a4ff-07f367afd148" (UID: "fcceb32f-15f2-442f-a4ff-07f367afd148"). InnerVolumeSpecName "kube-api-access-dqpqv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.513799 5109 scope.go:117] "RemoveContainer" containerID="2f05b706eb8510635c0ffe1b1ee99c2820a876e7aaf6db1cb9f77a8b7e0ea015"
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.526350 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcceb32f-15f2-442f-a4ff-07f367afd148" (UID: "fcceb32f-15f2-442f-a4ff-07f367afd148"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.578497 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.578523 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcceb32f-15f2-442f-a4ff-07f367afd148-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.578533 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqpqv\" (UniqueName: \"kubernetes.io/projected/fcceb32f-15f2-442f-a4ff-07f367afd148-kube-api-access-dqpqv\") on node \"crc\" DevicePath \"\""
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.741433 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"]
Feb 17 00:21:33 crc kubenswrapper[5109]: I0217 00:21:33.748891 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bdrw"]
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.421057 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t" event={"ID":"642eed45-4b50-4af0-9656-27559798d21c","Type":"ContainerStarted","Data":"7624a238fc44d439d4692c77c97568a9529cdfcf1b5a2281079ee086b28468fb"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.425109 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l" event={"ID":"6877d358-6ca8-41b1-8eac-69453394b64b","Type":"ContainerStarted","Data":"b3ac5d46271f729286316d97d2277ca0ab7b6375166a3211b31864be0a80c84d"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.426957 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n" event={"ID":"034835f5-e853-4070-9944-045206b1b990","Type":"ContainerStarted","Data":"88d5384c9d113a85d58aee75957562f8360700af41797fe5acc59d1cd01236c8"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.429896 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" event={"ID":"52985de1-df17-4d46-8b16-07bedfc870c0","Type":"ContainerStarted","Data":"4f51e99b38ca223a2a38e516f2ee60bb6722887f81cacdf58b96dfa63cb3e571"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.430034 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.431621 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-gtfv5" event={"ID":"f517e4ec-97f5-4f18-8ea5-f1f3b37dd332","Type":"ContainerStarted","Data":"6dbe78bc1a2061ddf9dcf5d1a915b9f7070850c1e44b8d98dd60fad0c5e7913c"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.431867 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-gtfv5"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.433922 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-567b56b95-744lx" event={"ID":"c14a4db7-e671-4c26-a917-082ebc150072","Type":"ContainerStarted","Data":"94908458406ebd74195ccead68063af36b09ef290964dfeffd20f197f5326b47"}
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.434393 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-gtfv5"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.442403 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t" podStartSLOduration=5.23497434 podStartE2EDuration="16.442382736s" podCreationTimestamp="2026-02-17 00:21:18 +0000 UTC" firstStartedPulling="2026-02-17 00:21:22.16460959 +0000 UTC m=+753.496164348" lastFinishedPulling="2026-02-17 00:21:33.372017986 +0000 UTC m=+764.703572744" observedRunningTime="2026-02-17 00:21:34.438694512 +0000 UTC m=+765.770249270" watchObservedRunningTime="2026-02-17 00:21:34.442382736 +0000 UTC m=+765.773937514"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.477608 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-gtfv5" podStartSLOduration=5.186806311 podStartE2EDuration="16.477576644s" podCreationTimestamp="2026-02-17 00:21:18 +0000 UTC" firstStartedPulling="2026-02-17 00:21:22.095322992 +0000 UTC m=+753.426877750" lastFinishedPulling="2026-02-17 00:21:33.386093295 +0000 UTC m=+764.717648083" observedRunningTime="2026-02-17 00:21:34.474164557 +0000 UTC m=+765.805719315" watchObservedRunningTime="2026-02-17 00:21:34.477576644 +0000 UTC m=+765.809131402"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.502424 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" podStartSLOduration=4.463912392 podStartE2EDuration="15.502406788s" podCreationTimestamp="2026-02-17 00:21:19 +0000 UTC" firstStartedPulling="2026-02-17 00:21:22.403529166 +0000 UTC m=+753.735083924" lastFinishedPulling="2026-02-17 00:21:33.442023562 +0000 UTC m=+764.773578320" observedRunningTime="2026-02-17 00:21:34.498622101 +0000 UTC m=+765.830176869" watchObservedRunningTime="2026-02-17 00:21:34.502406788 +0000 UTC m=+765.833961546"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.524403 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-567b56b95-744lx" podStartSLOduration=2.2662603150000002 podStartE2EDuration="12.524384689s" podCreationTimestamp="2026-02-17 00:21:22 +0000 UTC" firstStartedPulling="2026-02-17 00:21:23.001068843 +0000 UTC m=+754.332623591" lastFinishedPulling="2026-02-17 00:21:33.259193207 +0000 UTC m=+764.590747965" observedRunningTime="2026-02-17 00:21:34.520476939 +0000 UTC m=+765.852031697" watchObservedRunningTime="2026-02-17 00:21:34.524384689 +0000 UTC m=+765.855939447"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.585251 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vvp9n" podStartSLOduration=5.606643854 podStartE2EDuration="16.585233371s" podCreationTimestamp="2026-02-17 00:21:18 +0000 UTC" firstStartedPulling="2026-02-17 00:21:22.402139411 +0000 UTC m=+753.733694169" lastFinishedPulling="2026-02-17 00:21:33.380728928 +0000 UTC m=+764.712283686" observedRunningTime="2026-02-17 00:21:34.57969478 +0000 UTC m=+765.911249548" watchObservedRunningTime="2026-02-17 00:21:34.585233371 +0000 UTC m=+765.916788139"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606127 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"]
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606689 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="extract-content"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606708 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="extract-content"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606722 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="pull"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606732 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="pull"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606744 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606749 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606761 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="util"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606767 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="util"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606776 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="extract"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606784 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="extract"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606820 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="extract-content"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606829 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="extract-content"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606840 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606848 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606859 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="extract-utilities"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606866 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="extract-utilities"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606888 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="extract-utilities"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606895 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="extract-utilities"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.606994 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.607004 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="049a8588-d5af-44ed-a85c-77a01850a79d" containerName="extract"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.607020 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="c55a14c8-0f4e-4283-bc15-94334c567101" containerName="registry-server"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.610314 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.613745 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-xbrgn\""
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.613873 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.613959 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.627100 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l" podStartSLOduration=5.249125912 podStartE2EDuration="16.627084529s" podCreationTimestamp="2026-02-17 00:21:18 +0000 UTC" firstStartedPulling="2026-02-17 00:21:21.995846374 +0000 UTC m=+753.327401132" lastFinishedPulling="2026-02-17 00:21:33.373804991 +0000 UTC m=+764.705359749" observedRunningTime="2026-02-17 00:21:34.620717777 +0000 UTC m=+765.952272535" watchObservedRunningTime="2026-02-17 00:21:34.627084529 +0000 UTC m=+765.958639287"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.631523 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"]
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.694013 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c6vf\" (UniqueName: \"kubernetes.io/projected/5b320f13-6574-43cf-a5f2-21a7084223d3-kube-api-access-9c6vf\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.694147 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b320f13-6574-43cf-a5f2-21a7084223d3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.795618 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b320f13-6574-43cf-a5f2-21a7084223d3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.795679 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9c6vf\" (UniqueName: \"kubernetes.io/projected/5b320f13-6574-43cf-a5f2-21a7084223d3-kube-api-access-9c6vf\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.796248 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5b320f13-6574-43cf-a5f2-21a7084223d3-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.822425 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c6vf\" (UniqueName: \"kubernetes.io/projected/5b320f13-6574-43cf-a5f2-21a7084223d3-kube-api-access-9c6vf\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2695\" (UID: \"5b320f13-6574-43cf-a5f2-21a7084223d3\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:34 crc kubenswrapper[5109]: I0217 00:21:34.922901 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"
Feb 17 00:21:35 crc kubenswrapper[5109]: I0217 00:21:35.174870 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695"]
Feb 17 00:21:35 crc kubenswrapper[5109]: I0217 00:21:35.440566 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695" event={"ID":"5b320f13-6574-43cf-a5f2-21a7084223d3","Type":"ContainerStarted","Data":"d9cf1960b8e0235f3ff387d14e02fe438fe63b0a3413cd012b399f56c2c5d9b1"}
Feb 17 00:21:35 crc kubenswrapper[5109]: I0217 00:21:35.471811 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcceb32f-15f2-442f-a4ff-07f367afd148" path="/var/lib/kubelet/pods/fcceb32f-15f2-442f-a4ff-07f367afd148/volumes"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.781687 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.786446 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.789219 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.789244 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.790681 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.790833 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.790851 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.790959 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.791417 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-4fxqj\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.791618 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.791856 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.795689 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.919612 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.919936 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920053 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920133 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920210 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920287 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920352 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/dc9f509d-8e56-4537-b20e-a52f477f336f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920428 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920503 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920568 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920658 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920757 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920839 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.920943 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:36 crc kubenswrapper[5109]: I0217 00:21:36.921038 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022478 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022626 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022653 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022703 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022831 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022856 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022888 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022906 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/dc9f509d-8e56-4537-b20e-a52f477f336f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.022924 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023262 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023402 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023451 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023487 5109 reconciler_common.go:224]
"operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023502 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023524 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023571 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023662 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.023700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.024070 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.024579 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.024665 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.025133 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/dc9f509d-8e56-4537-b20e-a52f477f336f-elasticsearch-data\") pod 
\"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.025188 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.029358 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.029793 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.032185 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/dc9f509d-8e56-4537-b20e-a52f477f336f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.032313 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.045590 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.048457 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.056234 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/dc9f509d-8e56-4537-b20e-a52f477f336f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"dc9f509d-8e56-4537-b20e-a52f477f336f\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.104370 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:21:37 crc kubenswrapper[5109]: I0217 00:21:37.664737 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.721890 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rvx97"] Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.730224 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.740809 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvx97"] Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.889014 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprqp\" (UniqueName: \"kubernetes.io/projected/450c297a-058e-408b-8625-ede5977ddfb1-kube-api-access-pprqp\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.889190 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-utilities\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.889252 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-catalog-content\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " 
pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: W0217 00:21:38.944265 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc9f509d_8e56_4537_b20e_a52f477f336f.slice/crio-3f0a8d3799cd638ab6edfbaf40ef8502cfc7c740f3ef5343737f0103d8965286 WatchSource:0}: Error finding container 3f0a8d3799cd638ab6edfbaf40ef8502cfc7c740f3ef5343737f0103d8965286: Status 404 returned error can't find the container with id 3f0a8d3799cd638ab6edfbaf40ef8502cfc7c740f3ef5343737f0103d8965286 Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.990517 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-utilities\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.990564 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-catalog-content\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.990620 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pprqp\" (UniqueName: \"kubernetes.io/projected/450c297a-058e-408b-8625-ede5977ddfb1-kube-api-access-pprqp\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.991107 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-utilities\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:38 crc kubenswrapper[5109]: I0217 00:21:38.991157 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/450c297a-058e-408b-8625-ede5977ddfb1-catalog-content\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.014389 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprqp\" (UniqueName: \"kubernetes.io/projected/450c297a-058e-408b-8625-ede5977ddfb1-kube-api-access-pprqp\") pod \"community-operators-rvx97\" (UID: \"450c297a-058e-408b-8625-ede5977ddfb1\") " pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.099953 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rvx97" Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.500285 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"dc9f509d-8e56-4537-b20e-a52f477f336f","Type":"ContainerStarted","Data":"3f0a8d3799cd638ab6edfbaf40ef8502cfc7c740f3ef5343737f0103d8965286"} Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.509744 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695" event={"ID":"5b320f13-6574-43cf-a5f2-21a7084223d3","Type":"ContainerStarted","Data":"203efb6194d3ff6f1ff8c6f119fa14b21b1dc750e1161ec0f1e6b95d7d331022"} Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.579655 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2695" podStartSLOduration=1.746600633 podStartE2EDuration="5.579635012s" podCreationTimestamp="2026-02-17 00:21:34 +0000 UTC" firstStartedPulling="2026-02-17 00:21:35.183410924 +0000 UTC m=+766.514965682" lastFinishedPulling="2026-02-17 00:21:39.016445303 +0000 UTC m=+770.348000061" observedRunningTime="2026-02-17 00:21:39.577800265 +0000 UTC m=+770.909355023" watchObservedRunningTime="2026-02-17 00:21:39.579635012 +0000 UTC m=+770.911189770" Feb 17 00:21:39 crc kubenswrapper[5109]: I0217 00:21:39.604468 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvx97"] Feb 17 00:21:40 crc kubenswrapper[5109]: I0217 00:21:40.522904 5109 generic.go:358] "Generic (PLEG): container finished" podID="450c297a-058e-408b-8625-ede5977ddfb1" containerID="c06c5071d0d3dfdcb9ba59e11980627b1e4b4cafbb7f8148b698f8fc779010ee" exitCode=0 Feb 17 00:21:40 crc kubenswrapper[5109]: I0217 00:21:40.524363 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rvx97" event={"ID":"450c297a-058e-408b-8625-ede5977ddfb1","Type":"ContainerDied","Data":"c06c5071d0d3dfdcb9ba59e11980627b1e4b4cafbb7f8148b698f8fc779010ee"} Feb 17 00:21:40 crc kubenswrapper[5109]: I0217 00:21:40.524393 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvx97" event={"ID":"450c297a-058e-408b-8625-ede5977ddfb1","Type":"ContainerStarted","Data":"9052e99c6d940f358f2cd49aa16540da708492de1920b4b44f9763bbe43a08a6"} Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.343992 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-f5pbf"] Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.349436 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.352025 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-fvgfm\"" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.353077 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.353177 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-f5pbf"] Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.358434 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.372151 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: 
\"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.372190 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-kube-api-access-kh89w\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: \"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.479101 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: \"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.479159 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-kube-api-access-kh89w\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: \"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.500238 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-kube-api-access-kh89w\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: \"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.517114 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/dca37b2b-eb0d-4835-beb6-a53dde67a2f8-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-f5pbf\" (UID: \"dca37b2b-eb0d-4835-beb6-a53dde67a2f8\") " pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.676922 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:21:43 crc kubenswrapper[5109]: I0217 00:21:43.992817 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-f5pbf"] Feb 17 00:21:45 crc kubenswrapper[5109]: I0217 00:21:45.443095 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-bgfhc" Feb 17 00:21:45 crc kubenswrapper[5109]: I0217 00:21:45.896860 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-smnww"] Feb 17 00:21:45 crc kubenswrapper[5109]: I0217 00:21:45.913809 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:45 crc kubenswrapper[5109]: I0217 00:21:45.916414 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-xzbf5\"" Feb 17 00:21:45 crc kubenswrapper[5109]: I0217 00:21:45.917839 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-smnww"] Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.017134 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qzq\" (UniqueName: \"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-kube-api-access-n8qzq\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.017184 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.118856 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qzq\" (UniqueName: \"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-kube-api-access-n8qzq\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.118906 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.150474 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qzq\" (UniqueName: \"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-kube-api-access-n8qzq\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.160683 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/614ff088-aa03-4a19-bdd0-00cecbe79da4-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-smnww\" (UID: \"614ff088-aa03-4a19-bdd0-00cecbe79da4\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:46 crc kubenswrapper[5109]: I0217 00:21:46.233499 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" Feb 17 00:21:47 crc kubenswrapper[5109]: I0217 00:21:47.275031 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-smnww"] Feb 17 00:21:47 crc kubenswrapper[5109]: I0217 00:21:47.580523 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" event={"ID":"dca37b2b-eb0d-4835-beb6-a53dde67a2f8","Type":"ContainerStarted","Data":"23a76ffc302d9c5d328312913a047878d6c11fb5f3307386251c43a5a98748fa"} Feb 17 00:21:47 crc kubenswrapper[5109]: I0217 00:21:47.582255 5109 generic.go:358] "Generic (PLEG): container finished" podID="450c297a-058e-408b-8625-ede5977ddfb1" containerID="6f0a6336ef923099149942abcda8be785cd2bf46c25b6d1d7ec4c9c982c27bcf" exitCode=0 Feb 17 00:21:47 crc kubenswrapper[5109]: I0217 00:21:47.582294 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvx97" event={"ID":"450c297a-058e-408b-8625-ede5977ddfb1","Type":"ContainerDied","Data":"6f0a6336ef923099149942abcda8be785cd2bf46c25b6d1d7ec4c9c982c27bcf"} Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.211755 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.254184 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.254286 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.256398 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\""
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.338226 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdnq\" (UniqueName: \"kubernetes.io/projected/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-kube-api-access-lrdnq\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.338275 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.338411 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: W0217 00:21:51.373909 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod614ff088_aa03_4a19_bdd0_00cecbe79da4.slice/crio-f3d463f2d5b6e4a4ce03a22d5736fd80ccf6179e55f4988ac3a4133572f6add1 WatchSource:0}: Error finding container f3d463f2d5b6e4a4ce03a22d5736fd80ccf6179e55f4988ac3a4133572f6add1: Status 404 returned error can't find the container with id f3d463f2d5b6e4a4ce03a22d5736fd80ccf6179e55f4988ac3a4133572f6add1
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.439845 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdnq\" (UniqueName: \"kubernetes.io/projected/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-kube-api-access-lrdnq\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.439906 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.439973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.440371 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.440853 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.459071 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdnq\" (UniqueName: \"kubernetes.io/projected/f1f32a91-c284-4c23-8a2d-5fe6626b83e5-kube-api-access-lrdnq\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"f1f32a91-c284-4c23-8a2d-5fe6626b83e5\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.573580 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 17 00:21:51 crc kubenswrapper[5109]: I0217 00:21:51.610612 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" event={"ID":"614ff088-aa03-4a19-bdd0-00cecbe79da4","Type":"ContainerStarted","Data":"f3d463f2d5b6e4a4ce03a22d5736fd80ccf6179e55f4988ac3a4133572f6add1"}
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.124953 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.623362 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"f1f32a91-c284-4c23-8a2d-5fe6626b83e5","Type":"ContainerStarted","Data":"29db4acb5c7d3c65e85c69c79200a7deea9f22db956db4a55e8a24fe0f70b278"}
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.629216 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvx97" event={"ID":"450c297a-058e-408b-8625-ede5977ddfb1","Type":"ContainerStarted","Data":"dea179e57ad340e430efdf717d3741b7171d9bfd0f8f8980f3baf016c90c85f8"}
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.634013 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"dc9f509d-8e56-4537-b20e-a52f477f336f","Type":"ContainerStarted","Data":"ff1fd32d858d0d191e4a8c11b71c38593e8fe386d1b9567af0d7f5f3922fd652"}
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.653202 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rvx97" podStartSLOduration=8.344137053 podStartE2EDuration="14.653182397s" podCreationTimestamp="2026-02-17 00:21:38 +0000 UTC" firstStartedPulling="2026-02-17 00:21:40.524248044 +0000 UTC m=+771.855802802" lastFinishedPulling="2026-02-17 00:21:46.833293388 +0000 UTC m=+778.164848146" observedRunningTime="2026-02-17 00:21:52.647028013 +0000 UTC m=+783.978582781" watchObservedRunningTime="2026-02-17 00:21:52.653182397 +0000 UTC m=+783.984737165"
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.841233 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 17 00:21:52 crc kubenswrapper[5109]: I0217 00:21:52.871824 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 17 00:21:54 crc kubenswrapper[5109]: I0217 00:21:54.646907 5109 generic.go:358] "Generic (PLEG): container finished" podID="dc9f509d-8e56-4537-b20e-a52f477f336f" containerID="ff1fd32d858d0d191e4a8c11b71c38593e8fe386d1b9567af0d7f5f3922fd652" exitCode=0
Feb 17 00:21:54 crc kubenswrapper[5109]: I0217 00:21:54.646994 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"dc9f509d-8e56-4537-b20e-a52f477f336f","Type":"ContainerDied","Data":"ff1fd32d858d0d191e4a8c11b71c38593e8fe386d1b9567af0d7f5f3922fd652"}
Feb 17 00:21:59 crc kubenswrapper[5109]: I0217 00:21:59.102784 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rvx97"
Feb 17 00:21:59 crc kubenswrapper[5109]: I0217 00:21:59.102858 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rvx97"
Feb 17 00:21:59 crc kubenswrapper[5109]: I0217 00:21:59.142479 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rvx97"
Feb 17 00:21:59 crc kubenswrapper[5109]: I0217 00:21:59.727358 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rvx97"
Feb 17 00:21:59 crc kubenswrapper[5109]: I0217 00:21:59.950653 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvx97"]
Feb 17 00:22:00 crc kubenswrapper[5109]: I0217 00:22:00.113720 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9bnr"]
Feb 17 00:22:00 crc kubenswrapper[5109]: I0217 00:22:00.114810 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9bnr" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="registry-server" containerID="cri-o://22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06" gracePeriod=2
Feb 17 00:22:00 crc kubenswrapper[5109]: I0217 00:22:00.125703 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521462-stpx6"]
Feb 17 00:22:01 crc kubenswrapper[5109]: E0217 00:22:01.205522 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06 is running failed: container process not found" containerID="22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:22:01 crc kubenswrapper[5109]: E0217 00:22:01.206137 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06 is running failed: container process not found" containerID="22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:22:01 crc kubenswrapper[5109]: E0217 00:22:01.206550 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06 is running failed: container process not found" containerID="22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06" cmd=["grpc_health_probe","-addr=:50051"]
Feb 17 00:22:01 crc kubenswrapper[5109]: E0217 00:22:01.206587 5109 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-p9bnr" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="registry-server" probeResult="unknown"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.349746 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521462-stpx6"]
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.349859 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.353734 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\""
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.354578 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.363473 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.366949 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-h8g6s"]
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.487307 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvct9\" (UniqueName: \"kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9\") pod \"auto-csr-approver-29521462-stpx6\" (UID: \"73c35e34-4f4c-48cd-81d5-3e75e02cc47a\") " pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.561352 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-h8g6s"]
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.561555 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.563375 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-4dwxt\""
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.588706 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvct9\" (UniqueName: \"kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9\") pod \"auto-csr-approver-29521462-stpx6\" (UID: \"73c35e34-4f4c-48cd-81d5-3e75e02cc47a\") " pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.612397 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvct9\" (UniqueName: \"kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9\") pod \"auto-csr-approver-29521462-stpx6\" (UID: \"73c35e34-4f4c-48cd-81d5-3e75e02cc47a\") " pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.685956 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.691014 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-bound-sa-token\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.691623 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzn2f\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-kube-api-access-mzn2f\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.699021 5109 generic.go:358] "Generic (PLEG): container finished" podID="19039840-56d5-49c2-b68c-ec50dc56a282" containerID="22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06" exitCode=0
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.699374 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerDied","Data":"22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06"}
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.792520 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mzn2f\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-kube-api-access-mzn2f\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.793379 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-bound-sa-token\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.813821 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-bound-sa-token\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.824620 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzn2f\" (UniqueName: \"kubernetes.io/projected/e7f6207b-d886-4468-a19a-fa322372c3a1-kube-api-access-mzn2f\") pod \"cert-manager-759f64656b-h8g6s\" (UID: \"e7f6207b-d886-4468-a19a-fa322372c3a1\") " pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:01 crc kubenswrapper[5109]: I0217 00:22:01.884405 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-h8g6s"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.150436 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bnr"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.299228 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities\") pod \"19039840-56d5-49c2-b68c-ec50dc56a282\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") "
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.299528 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntsk\" (UniqueName: \"kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk\") pod \"19039840-56d5-49c2-b68c-ec50dc56a282\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") "
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.299617 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content\") pod \"19039840-56d5-49c2-b68c-ec50dc56a282\" (UID: \"19039840-56d5-49c2-b68c-ec50dc56a282\") "
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.301349 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities" (OuterVolumeSpecName: "utilities") pod "19039840-56d5-49c2-b68c-ec50dc56a282" (UID: "19039840-56d5-49c2-b68c-ec50dc56a282"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.303190 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521462-stpx6"]
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.309886 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk" (OuterVolumeSpecName: "kube-api-access-bntsk") pod "19039840-56d5-49c2-b68c-ec50dc56a282" (UID: "19039840-56d5-49c2-b68c-ec50dc56a282"). InnerVolumeSpecName "kube-api-access-bntsk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.386043 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19039840-56d5-49c2-b68c-ec50dc56a282" (UID: "19039840-56d5-49c2-b68c-ec50dc56a282"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.400950 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.401104 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bntsk\" (UniqueName: \"kubernetes.io/projected/19039840-56d5-49c2-b68c-ec50dc56a282-kube-api-access-bntsk\") on node \"crc\" DevicePath \"\""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.401172 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19039840-56d5-49c2-b68c-ec50dc56a282-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.560269 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-h8g6s"]
Feb 17 00:22:02 crc kubenswrapper[5109]: W0217 00:22:02.562112 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f6207b_d886_4468_a19a_fa322372c3a1.slice/crio-591ff2138d34f61a6025c7e253caf3daf67caaa2142eaac93510789ddc07e15f WatchSource:0}: Error finding container 591ff2138d34f61a6025c7e253caf3daf67caaa2142eaac93510789ddc07e15f: Status 404 returned error can't find the container with id 591ff2138d34f61a6025c7e253caf3daf67caaa2142eaac93510789ddc07e15f
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.705720 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" event={"ID":"614ff088-aa03-4a19-bdd0-00cecbe79da4","Type":"ContainerStarted","Data":"92c91a5df72047c280dff62c687fe19e26e45b86638dac9abcd9248bd29837c7"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.707846 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9bnr"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.707865 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9bnr" event={"ID":"19039840-56d5-49c2-b68c-ec50dc56a282","Type":"ContainerDied","Data":"3195d873b8dd59acc3613963cf22738fd0122fc39e878473cb22c55dd7fa64dc"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.707920 5109 scope.go:117] "RemoveContainer" containerID="22c960e67f4f028ae6319073b8299269a7c6878b282de0f150d6393a6e851c06"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.708698 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521462-stpx6" event={"ID":"73c35e34-4f4c-48cd-81d5-3e75e02cc47a","Type":"ContainerStarted","Data":"a8ad16f3faadd1af9b394377b02c9a455342096328d544db162b466aac77b7f4"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.711921 5109 generic.go:358] "Generic (PLEG): container finished" podID="dc9f509d-8e56-4537-b20e-a52f477f336f" containerID="885ab57f9d3165e753a136efc9ed58e3b94036925a0154bb59d52bc46f2fad67" exitCode=0
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.712074 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"dc9f509d-8e56-4537-b20e-a52f477f336f","Type":"ContainerDied","Data":"885ab57f9d3165e753a136efc9ed58e3b94036925a0154bb59d52bc46f2fad67"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.722146 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" event={"ID":"dca37b2b-eb0d-4835-beb6-a53dde67a2f8","Type":"ContainerStarted","Data":"ba85f010c9abd43974dad467fb67c713bdbc84353b033eae9a6d3845f1777e50"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.722319 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.731749 5109 scope.go:117] "RemoveContainer" containerID="e51bb6a39c745535e5c74b89a7e880790c69dd67572f19121598bbe0df46a3a7"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.734564 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-h8g6s" event={"ID":"e7f6207b-d886-4468-a19a-fa322372c3a1","Type":"ContainerStarted","Data":"591ff2138d34f61a6025c7e253caf3daf67caaa2142eaac93510789ddc07e15f"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.736506 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-smnww" podStartSLOduration=6.984626553 podStartE2EDuration="17.736471709s" podCreationTimestamp="2026-02-17 00:21:45 +0000 UTC" firstStartedPulling="2026-02-17 00:21:51.378169816 +0000 UTC m=+782.709724584" lastFinishedPulling="2026-02-17 00:22:02.130014972 +0000 UTC m=+793.461569740" observedRunningTime="2026-02-17 00:22:02.730963383 +0000 UTC m=+794.062518141" watchObservedRunningTime="2026-02-17 00:22:02.736471709 +0000 UTC m=+794.068026517"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.748838 5109 generic.go:358] "Generic (PLEG): container finished" podID="f1f32a91-c284-4c23-8a2d-5fe6626b83e5" containerID="db8f23d2921ae7ed88d858c69aadc628384ff1ee8a724dc3d3fe13821a8d2d20" exitCode=0
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.749067 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"f1f32a91-c284-4c23-8a2d-5fe6626b83e5","Type":"ContainerDied","Data":"db8f23d2921ae7ed88d858c69aadc628384ff1ee8a724dc3d3fe13821a8d2d20"}
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.793704 5109 scope.go:117] "RemoveContainer" containerID="d732148f3f7a2f6d2ef39e863e0302f4107e7eb1fbe3956f4e906ddbac730346"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.795241 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9bnr"]
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.804559 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9bnr"]
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.805469 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" podStartSLOduration=4.468725096 podStartE2EDuration="19.805454508s" podCreationTimestamp="2026-02-17 00:21:43 +0000 UTC" firstStartedPulling="2026-02-17 00:21:46.810298631 +0000 UTC m=+778.141853399" lastFinishedPulling="2026-02-17 00:22:02.147028053 +0000 UTC m=+793.478582811" observedRunningTime="2026-02-17 00:22:02.799437859 +0000 UTC m=+794.130992617" watchObservedRunningTime="2026-02-17 00:22:02.805454508 +0000 UTC m=+794.137009266"
Feb 17 00:22:02 crc kubenswrapper[5109]: I0217 00:22:02.830616 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-h8g6s" podStartSLOduration=1.8305524229999999 podStartE2EDuration="1.830552423s" podCreationTimestamp="2026-02-17 00:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:22:02.830123852 +0000 UTC m=+794.161678610" watchObservedRunningTime="2026-02-17 00:22:02.830552423 +0000 UTC m=+794.162107191"
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.474526 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" path="/var/lib/kubelet/pods/19039840-56d5-49c2-b68c-ec50dc56a282/volumes"
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.760189 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521462-stpx6" event={"ID":"73c35e34-4f4c-48cd-81d5-3e75e02cc47a","Type":"ContainerStarted","Data":"67cfa28822a2f9a6fff59388f803c023da372a8bf6cb4e5789b56bdff2fe08ab"}
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.763928 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"dc9f509d-8e56-4537-b20e-a52f477f336f","Type":"ContainerStarted","Data":"79d9a344a8ab0edb3818495cc3e4cceda1c6f7851be9b472e7d68af0d04a6adf"}
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.764359 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.767966 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-h8g6s" event={"ID":"e7f6207b-d886-4468-a19a-fa322372c3a1","Type":"ContainerStarted","Data":"81a498068b080a7f5aaebfe6354d33165fd95d8e4df87b95b65d77973bea6588"}
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.775126 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29521462-stpx6" podStartSLOduration=2.791239832 podStartE2EDuration="3.775110154s" podCreationTimestamp="2026-02-17 00:22:00 +0000 UTC" firstStartedPulling="2026-02-17 00:22:02.337572975 +0000 UTC m=+793.669127733" lastFinishedPulling="2026-02-17 00:22:03.321443297 +0000 UTC m=+794.652998055" observedRunningTime="2026-02-17 00:22:03.771354454 +0000 UTC m=+795.102909212" watchObservedRunningTime="2026-02-17 00:22:03.775110154 +0000 UTC m=+795.106664912"
Feb 17 00:22:03 crc kubenswrapper[5109]: I0217 00:22:03.814664 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=14.924057266 podStartE2EDuration="27.813258705s" podCreationTimestamp="2026-02-17 00:21:36 +0000 UTC" firstStartedPulling="2026-02-17 00:21:38.948106329 +0000 UTC m=+770.279661087" lastFinishedPulling="2026-02-17 00:21:51.837307768 +0000 UTC m=+783.168862526" observedRunningTime="2026-02-17 00:22:03.807927884 +0000 UTC m=+795.139482642" watchObservedRunningTime="2026-02-17 00:22:03.813258705 +0000 UTC m=+795.144813463"
Feb 17 00:22:04 crc kubenswrapper[5109]: I0217 00:22:04.778557 5109 generic.go:358] "Generic (PLEG): container finished" podID="73c35e34-4f4c-48cd-81d5-3e75e02cc47a" containerID="67cfa28822a2f9a6fff59388f803c023da372a8bf6cb4e5789b56bdff2fe08ab" exitCode=0
Feb 17 00:22:04 crc kubenswrapper[5109]: I0217 00:22:04.778734 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521462-stpx6" event={"ID":"73c35e34-4f4c-48cd-81d5-3e75e02cc47a","Type":"ContainerDied","Data":"67cfa28822a2f9a6fff59388f803c023da372a8bf6cb4e5789b56bdff2fe08ab"}
Feb 17 00:22:05 crc kubenswrapper[5109]: I0217 00:22:05.803002 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"f1f32a91-c284-4c23-8a2d-5fe6626b83e5","Type":"ContainerStarted","Data":"01af47a1461b6682b43a90a1debee4e110b863c22ab88cb5e2ef1a4f277ca2fd"}
Feb 17 00:22:05 crc kubenswrapper[5109]: I0217 00:22:05.835373 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=1.609765245 podStartE2EDuration="14.835349229s" podCreationTimestamp="2026-02-17 00:21:51 +0000 UTC" firstStartedPulling="2026-02-17 00:21:52.13426757 +0000 UTC m=+783.465822328" lastFinishedPulling="2026-02-17 00:22:05.359851554 +0000 UTC m=+796.691406312" observedRunningTime="2026-02-17 00:22:05.831498867 +0000 UTC m=+797.163053655" watchObservedRunningTime="2026-02-17 00:22:05.835349229 +0000 UTC m=+797.166904007"
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.091894 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.164225 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvct9\" (UniqueName: \"kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9\") pod \"73c35e34-4f4c-48cd-81d5-3e75e02cc47a\" (UID: \"73c35e34-4f4c-48cd-81d5-3e75e02cc47a\") "
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.169328 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9" (OuterVolumeSpecName: "kube-api-access-bvct9") pod "73c35e34-4f4c-48cd-81d5-3e75e02cc47a" (UID: "73c35e34-4f4c-48cd-81d5-3e75e02cc47a"). InnerVolumeSpecName "kube-api-access-bvct9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.265289 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bvct9\" (UniqueName: \"kubernetes.io/projected/73c35e34-4f4c-48cd-81d5-3e75e02cc47a-kube-api-access-bvct9\") on node \"crc\" DevicePath \"\""
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.815524 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521462-stpx6"
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.817541 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521462-stpx6" event={"ID":"73c35e34-4f4c-48cd-81d5-3e75e02cc47a","Type":"ContainerDied","Data":"a8ad16f3faadd1af9b394377b02c9a455342096328d544db162b466aac77b7f4"}
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.817605 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ad16f3faadd1af9b394377b02c9a455342096328d544db162b466aac77b7f4"
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.837672 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29521456-hwxxh"]
Feb 17 00:22:06 crc kubenswrapper[5109]: I0217 00:22:06.849441 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29521456-hwxxh"]
Feb 17 00:22:07 crc kubenswrapper[5109]: I0217 00:22:07.472945 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5d6f33d-c80d-442d-9f91-d0162e328c59" path="/var/lib/kubelet/pods/d5d6f33d-c80d-442d-9f91-d0162e328c59/volumes"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.157333 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"]
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158158 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="extract-utilities"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158175 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="extract-utilities"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158190 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="extract-content"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158197 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="extract-content"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158213 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="registry-server"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158221 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="registry-server"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158246 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73c35e34-4f4c-48cd-81d5-3e75e02cc47a" containerName="oc"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158254 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c35e34-4f4c-48cd-81d5-3e75e02cc47a" containerName="oc"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158360 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="73c35e34-4f4c-48cd-81d5-3e75e02cc47a" containerName="oc"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.158386 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="19039840-56d5-49c2-b68c-ec50dc56a282" containerName="registry-server"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.164809 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.174464 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"]
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.189474 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.189564 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72xs\" (UniqueName: \"kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.189623 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"
Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.291007 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle\")
pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.291321 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h72xs\" (UniqueName: \"kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.291346 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.291524 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.291690 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.317759 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72xs\" (UniqueName: \"kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.512255 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.771867 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-f5pbf" Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.786547 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f"] Feb 17 00:22:08 crc kubenswrapper[5109]: I0217 00:22:08.827625 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerStarted","Data":"0ae178836a3a67727b57e4c400123b26c166a350a8cb29129167b343acbad31c"} Feb 17 00:22:09 crc kubenswrapper[5109]: I0217 00:22:09.835479 5109 generic.go:358] "Generic (PLEG): container finished" podID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerID="9d65deb00d79cf5d0197442251d2e5cd89058d2ddc476633e2ca78e05bc60934" exitCode=0 Feb 17 00:22:09 crc kubenswrapper[5109]: I0217 00:22:09.835846 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerDied","Data":"9d65deb00d79cf5d0197442251d2e5cd89058d2ddc476633e2ca78e05bc60934"} Feb 17 00:22:11 crc kubenswrapper[5109]: I0217 00:22:11.852286 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerStarted","Data":"43f4df1c01065861a41db7240c2b304288e964c24893407c5953b2cbe06190b3"} Feb 17 00:22:12 crc kubenswrapper[5109]: I0217 00:22:12.861779 5109 generic.go:358] "Generic (PLEG): container finished" podID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerID="43f4df1c01065861a41db7240c2b304288e964c24893407c5953b2cbe06190b3" exitCode=0 Feb 17 00:22:12 crc kubenswrapper[5109]: I0217 00:22:12.861910 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerDied","Data":"43f4df1c01065861a41db7240c2b304288e964c24893407c5953b2cbe06190b3"} Feb 17 00:22:13 crc kubenswrapper[5109]: I0217 00:22:13.870638 5109 generic.go:358] "Generic (PLEG): container finished" podID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerID="937d3623a1b114d5bbd7af94dcd3904828b08fa15243598801b84a4c6454fc2e" exitCode=0 Feb 17 00:22:13 crc kubenswrapper[5109]: I0217 00:22:13.870749 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerDied","Data":"937d3623a1b114d5bbd7af94dcd3904828b08fa15243598801b84a4c6454fc2e"} Feb 17 00:22:14 crc kubenswrapper[5109]: I0217 00:22:14.864313 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="dc9f509d-8e56-4537-b20e-a52f477f336f" containerName="elasticsearch" probeResult="failure" output=< Feb 17 00:22:14 crc kubenswrapper[5109]: {"timestamp": "2026-02-17T00:22:14+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 17 00:22:14 crc kubenswrapper[5109]: > Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.226335 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.291561 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util\") pod \"99186f31-c7d2-43c8-b95a-40212c4dab28\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.291707 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72xs\" (UniqueName: \"kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs\") pod \"99186f31-c7d2-43c8-b95a-40212c4dab28\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.291757 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle\") pod \"99186f31-c7d2-43c8-b95a-40212c4dab28\" (UID: \"99186f31-c7d2-43c8-b95a-40212c4dab28\") " Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.292789 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle" (OuterVolumeSpecName: "bundle") pod "99186f31-c7d2-43c8-b95a-40212c4dab28" (UID: "99186f31-c7d2-43c8-b95a-40212c4dab28"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.304784 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util" (OuterVolumeSpecName: "util") pod "99186f31-c7d2-43c8-b95a-40212c4dab28" (UID: "99186f31-c7d2-43c8-b95a-40212c4dab28"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.306569 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs" (OuterVolumeSpecName: "kube-api-access-h72xs") pod "99186f31-c7d2-43c8-b95a-40212c4dab28" (UID: "99186f31-c7d2-43c8-b95a-40212c4dab28"). InnerVolumeSpecName "kube-api-access-h72xs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.393981 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.394024 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h72xs\" (UniqueName: \"kubernetes.io/projected/99186f31-c7d2-43c8-b95a-40212c4dab28-kube-api-access-h72xs\") on node \"crc\" DevicePath \"\"" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.394037 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99186f31-c7d2-43c8-b95a-40212c4dab28-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.888768 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" 
event={"ID":"99186f31-c7d2-43c8-b95a-40212c4dab28","Type":"ContainerDied","Data":"0ae178836a3a67727b57e4c400123b26c166a350a8cb29129167b343acbad31c"} Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.888819 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661gq25f" Feb 17 00:22:15 crc kubenswrapper[5109]: I0217 00:22:15.888835 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ae178836a3a67727b57e4c400123b26c166a350a8cb29129167b343acbad31c" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.357131 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.816710 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-ssbdj"] Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.817864 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="util" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.817903 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="util" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.817924 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="pull" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.817933 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="pull" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.817967 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="extract" Feb 17 00:22:20 crc kubenswrapper[5109]: 
I0217 00:22:20.817976 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="extract" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.818119 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="99186f31-c7d2-43c8-b95a-40212c4dab28" containerName="extract" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.822663 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.825164 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-rjplq\"" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.831470 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-ssbdj"] Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.988576 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx4gm\" (UniqueName: \"kubernetes.io/projected/04580c9e-33df-4a1a-9ac8-6af08f9682ae-kube-api-access-zx4gm\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:20 crc kubenswrapper[5109]: I0217 00:22:20.988653 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/04580c9e-33df-4a1a-9ac8-6af08f9682ae-runner\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.090651 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zx4gm\" (UniqueName: 
\"kubernetes.io/projected/04580c9e-33df-4a1a-9ac8-6af08f9682ae-kube-api-access-zx4gm\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.090730 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/04580c9e-33df-4a1a-9ac8-6af08f9682ae-runner\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.092089 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/04580c9e-33df-4a1a-9ac8-6af08f9682ae-runner\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.112519 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx4gm\" (UniqueName: \"kubernetes.io/projected/04580c9e-33df-4a1a-9ac8-6af08f9682ae-kube-api-access-zx4gm\") pod \"smart-gateway-operator-97b85656c-ssbdj\" (UID: \"04580c9e-33df-4a1a-9ac8-6af08f9682ae\") " pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.152788 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.421510 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-ssbdj"] Feb 17 00:22:21 crc kubenswrapper[5109]: I0217 00:22:21.935116 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" event={"ID":"04580c9e-33df-4a1a-9ac8-6af08f9682ae","Type":"ContainerStarted","Data":"7d85edb24aae11a54dce64e2a8ca47784159757eb81e6a4262ce31a9e4f9733a"} Feb 17 00:22:39 crc kubenswrapper[5109]: I0217 00:22:39.073457 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" event={"ID":"04580c9e-33df-4a1a-9ac8-6af08f9682ae","Type":"ContainerStarted","Data":"eff7575ac86b84f540370148dc68e18a591411cd3fa626b483d29c9b1ca275ce"} Feb 17 00:22:39 crc kubenswrapper[5109]: I0217 00:22:39.107499 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-ssbdj" podStartSLOduration=2.497651841 podStartE2EDuration="19.107473949s" podCreationTimestamp="2026-02-17 00:22:20 +0000 UTC" firstStartedPulling="2026-02-17 00:22:21.427326767 +0000 UTC m=+812.758881525" lastFinishedPulling="2026-02-17 00:22:38.037148875 +0000 UTC m=+829.368703633" observedRunningTime="2026-02-17 00:22:39.098005338 +0000 UTC m=+830.429560126" watchObservedRunningTime="2026-02-17 00:22:39.107473949 +0000 UTC m=+830.439028747" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.366273 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.383120 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 17 
00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.383295 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.386457 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.418760 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv29g\" (UniqueName: \"kubernetes.io/projected/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-kube-api-access-mv29g\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.418907 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.419009 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: 
\"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.520410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.520500 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.521695 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mv29g\" (UniqueName: \"kubernetes.io/projected/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-kube-api-access-mv29g\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.521769 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: 
\"kubernetes.io/empty-dir/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.523748 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.547497 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv29g\" (UniqueName: \"kubernetes.io/projected/1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1-kube-api-access-mv29g\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.713832 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 17 00:22:50 crc kubenswrapper[5109]: I0217 00:22:50.968453 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 17 00:22:50 crc kubenswrapper[5109]: W0217 00:22:50.970407 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1305cf78_6b8c_4abc_9e2a_bd1aa0947bc1.slice/crio-ff8b56c59829c77297c73b869e58ef84596319b453183fe9351b0c935e3b0a2b WatchSource:0}: Error finding container ff8b56c59829c77297c73b869e58ef84596319b453183fe9351b0c935e3b0a2b: Status 404 returned error can't find the container with id ff8b56c59829c77297c73b869e58ef84596319b453183fe9351b0c935e3b0a2b Feb 17 00:22:51 crc kubenswrapper[5109]: I0217 00:22:51.209392 5109 generic.go:358] "Generic (PLEG): container finished" podID="1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1" containerID="b7f402076154ab9296d8bf5742969cd53c3e8acf90f66083944bb0c62c3591be" exitCode=0 Feb 17 00:22:51 crc kubenswrapper[5109]: I0217 00:22:51.209440 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1","Type":"ContainerDied","Data":"b7f402076154ab9296d8bf5742969cd53c3e8acf90f66083944bb0c62c3591be"} Feb 17 00:22:51 crc kubenswrapper[5109]: I0217 00:22:51.209824 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1","Type":"ContainerStarted","Data":"ff8b56c59829c77297c73b869e58ef84596319b453183fe9351b0c935e3b0a2b"} Feb 17 00:22:51 crc kubenswrapper[5109]: I0217 00:22:51.754713 5109 scope.go:117] "RemoveContainer" 
containerID="605a180c05f6ed5f473b5bf1522a5fa3e95c8563bc2ca7b066ecebfedd9acb73" Feb 17 00:22:52 crc kubenswrapper[5109]: I0217 00:22:52.219648 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"1305cf78-6b8c-4abc-9e2a-bd1aa0947bc1","Type":"ContainerStarted","Data":"b3ff6c5eba0095379f81627b5d04e02f00bdbeef5dbfa173b6d468458d1c75a4"} Feb 17 00:22:52 crc kubenswrapper[5109]: I0217 00:22:52.239193 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=1.760403932 podStartE2EDuration="2.239172263s" podCreationTimestamp="2026-02-17 00:22:50 +0000 UTC" firstStartedPulling="2026-02-17 00:22:51.210537815 +0000 UTC m=+842.542092573" lastFinishedPulling="2026-02-17 00:22:51.689306106 +0000 UTC m=+843.020860904" observedRunningTime="2026-02-17 00:22:52.232111276 +0000 UTC m=+843.563666064" watchObservedRunningTime="2026-02-17 00:22:52.239172263 +0000 UTC m=+843.570727031" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.780570 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5"] Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.790561 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.794420 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.795308 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5"] Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.885928 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2f7\" (UniqueName: \"kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.886085 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.886166 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc 
kubenswrapper[5109]: I0217 00:22:53.987199 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wr2f7\" (UniqueName: \"kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.987360 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.987441 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.988294 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:53 crc kubenswrapper[5109]: I0217 00:22:53.988452 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.024771 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr2f7\" (UniqueName: \"kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.127553 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.440052 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5"] Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.554287 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57"] Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.573468 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57"] Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.573665 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.598053 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.598159 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4vd\" (UniqueName: \"kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.598300 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.699465 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 
17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.699850 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cg4vd\" (UniqueName: \"kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.699909 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.700170 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.700221 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.718835 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg4vd\" (UniqueName: 
\"kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:54 crc kubenswrapper[5109]: I0217 00:22:54.888412 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:22:55 crc kubenswrapper[5109]: I0217 00:22:55.143993 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57"] Feb 17 00:22:55 crc kubenswrapper[5109]: W0217 00:22:55.154789 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4e5132f_4971_4da9_a672_d47dd175706b.slice/crio-ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d WatchSource:0}: Error finding container ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d: Status 404 returned error can't find the container with id ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d Feb 17 00:22:55 crc kubenswrapper[5109]: I0217 00:22:55.241082 5109 generic.go:358] "Generic (PLEG): container finished" podID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerID="e43f8265da65263b1f9c7adb54a08f0387bc4f140ddede2efd10e94b210c5735" exitCode=0 Feb 17 00:22:55 crc kubenswrapper[5109]: I0217 00:22:55.241344 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" event={"ID":"72c6aa0c-3846-4057-be15-63d2e9e5f270","Type":"ContainerDied","Data":"e43f8265da65263b1f9c7adb54a08f0387bc4f140ddede2efd10e94b210c5735"} Feb 17 00:22:55 crc kubenswrapper[5109]: I0217 00:22:55.241379 5109 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" event={"ID":"72c6aa0c-3846-4057-be15-63d2e9e5f270","Type":"ContainerStarted","Data":"acd582a3a61f56523a4f945a859fa91bda4443a09096a4d790767ce286d66d57"} Feb 17 00:22:55 crc kubenswrapper[5109]: I0217 00:22:55.245050 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" event={"ID":"a4e5132f-4971-4da9-a672-d47dd175706b","Type":"ContainerStarted","Data":"ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d"} Feb 17 00:22:56 crc kubenswrapper[5109]: I0217 00:22:56.259507 5109 generic.go:358] "Generic (PLEG): container finished" podID="a4e5132f-4971-4da9-a672-d47dd175706b" containerID="36a16140c75fe6082acfd3577e39f6e14f1929bb4f29efa2fc1e8a09726485b5" exitCode=0 Feb 17 00:22:56 crc kubenswrapper[5109]: I0217 00:22:56.259695 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" event={"ID":"a4e5132f-4971-4da9-a672-d47dd175706b","Type":"ContainerDied","Data":"36a16140c75fe6082acfd3577e39f6e14f1929bb4f29efa2fc1e8a09726485b5"} Feb 17 00:22:57 crc kubenswrapper[5109]: I0217 00:22:57.267764 5109 generic.go:358] "Generic (PLEG): container finished" podID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerID="f682703236466aec9e13672010e3d0cbc7537c31fc0f18448e9ae12a2c76a90c" exitCode=0 Feb 17 00:22:57 crc kubenswrapper[5109]: I0217 00:22:57.267814 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" event={"ID":"72c6aa0c-3846-4057-be15-63d2e9e5f270","Type":"ContainerDied","Data":"f682703236466aec9e13672010e3d0cbc7537c31fc0f18448e9ae12a2c76a90c"} Feb 17 00:22:58 crc kubenswrapper[5109]: I0217 00:22:58.290792 5109 generic.go:358] "Generic (PLEG): container finished" podID="72c6aa0c-3846-4057-be15-63d2e9e5f270" 
containerID="d5f3bf81880433a4104a461d2f584f862c6b4d531618cf061228135602d64e8b" exitCode=0 Feb 17 00:22:58 crc kubenswrapper[5109]: I0217 00:22:58.291394 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" event={"ID":"72c6aa0c-3846-4057-be15-63d2e9e5f270","Type":"ContainerDied","Data":"d5f3bf81880433a4104a461d2f584f862c6b4d531618cf061228135602d64e8b"} Feb 17 00:22:58 crc kubenswrapper[5109]: I0217 00:22:58.295008 5109 generic.go:358] "Generic (PLEG): container finished" podID="a4e5132f-4971-4da9-a672-d47dd175706b" containerID="38082684eee754c3883b3942cc9351f6d67f53c684b56196ba1c241ad28353f4" exitCode=0 Feb 17 00:22:58 crc kubenswrapper[5109]: I0217 00:22:58.295081 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" event={"ID":"a4e5132f-4971-4da9-a672-d47dd175706b","Type":"ContainerDied","Data":"38082684eee754c3883b3942cc9351f6d67f53c684b56196ba1c241ad28353f4"} Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.305811 5109 generic.go:358] "Generic (PLEG): container finished" podID="a4e5132f-4971-4da9-a672-d47dd175706b" containerID="bee09ffce0306087dac23a5f7dd0570ced3d41c13302148868522126a8ca702c" exitCode=0 Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.305862 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" event={"ID":"a4e5132f-4971-4da9-a672-d47dd175706b","Type":"ContainerDied","Data":"bee09ffce0306087dac23a5f7dd0570ced3d41c13302148868522126a8ca702c"} Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.629243 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.676763 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util\") pod \"72c6aa0c-3846-4057-be15-63d2e9e5f270\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.676838 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle\") pod \"72c6aa0c-3846-4057-be15-63d2e9e5f270\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.676970 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr2f7\" (UniqueName: \"kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7\") pod \"72c6aa0c-3846-4057-be15-63d2e9e5f270\" (UID: \"72c6aa0c-3846-4057-be15-63d2e9e5f270\") " Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.677675 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle" (OuterVolumeSpecName: "bundle") pod "72c6aa0c-3846-4057-be15-63d2e9e5f270" (UID: "72c6aa0c-3846-4057-be15-63d2e9e5f270"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.684496 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7" (OuterVolumeSpecName: "kube-api-access-wr2f7") pod "72c6aa0c-3846-4057-be15-63d2e9e5f270" (UID: "72c6aa0c-3846-4057-be15-63d2e9e5f270"). InnerVolumeSpecName "kube-api-access-wr2f7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.697995 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util" (OuterVolumeSpecName: "util") pod "72c6aa0c-3846-4057-be15-63d2e9e5f270" (UID: "72c6aa0c-3846-4057-be15-63d2e9e5f270"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.778220 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wr2f7\" (UniqueName: \"kubernetes.io/projected/72c6aa0c-3846-4057-be15-63d2e9e5f270-kube-api-access-wr2f7\") on node \"crc\" DevicePath \"\"" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.778270 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:22:59 crc kubenswrapper[5109]: I0217 00:22:59.778290 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/72c6aa0c-3846-4057-be15-63d2e9e5f270-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.317475 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.317503 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5" event={"ID":"72c6aa0c-3846-4057-be15-63d2e9e5f270","Type":"ContainerDied","Data":"acd582a3a61f56523a4f945a859fa91bda4443a09096a4d790767ce286d66d57"} Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.317573 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd582a3a61f56523a4f945a859fa91bda4443a09096a4d790767ce286d66d57" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.632141 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.693266 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle\") pod \"a4e5132f-4971-4da9-a672-d47dd175706b\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.693436 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg4vd\" (UniqueName: \"kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd\") pod \"a4e5132f-4971-4da9-a672-d47dd175706b\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.693511 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util\") pod \"a4e5132f-4971-4da9-a672-d47dd175706b\" (UID: \"a4e5132f-4971-4da9-a672-d47dd175706b\") " Feb 17 00:23:00 crc 
kubenswrapper[5109]: I0217 00:23:00.694775 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle" (OuterVolumeSpecName: "bundle") pod "a4e5132f-4971-4da9-a672-d47dd175706b" (UID: "a4e5132f-4971-4da9-a672-d47dd175706b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.701050 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd" (OuterVolumeSpecName: "kube-api-access-cg4vd") pod "a4e5132f-4971-4da9-a672-d47dd175706b" (UID: "a4e5132f-4971-4da9-a672-d47dd175706b"). InnerVolumeSpecName "kube-api-access-cg4vd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.782748 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util" (OuterVolumeSpecName: "util") pod "a4e5132f-4971-4da9-a672-d47dd175706b" (UID: "a4e5132f-4971-4da9-a672-d47dd175706b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.794681 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.794718 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cg4vd\" (UniqueName: \"kubernetes.io/projected/a4e5132f-4971-4da9-a672-d47dd175706b-kube-api-access-cg4vd\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:00 crc kubenswrapper[5109]: I0217 00:23:00.794731 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4e5132f-4971-4da9-a672-d47dd175706b-util\") on node \"crc\" DevicePath \"\"" Feb 17 00:23:01 crc kubenswrapper[5109]: I0217 00:23:01.328142 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" event={"ID":"a4e5132f-4971-4da9-a672-d47dd175706b","Type":"ContainerDied","Data":"ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d"} Feb 17 00:23:01 crc kubenswrapper[5109]: I0217 00:23:01.328186 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca37c68c8f6050d3f5ebf242ffd70e3d1e3d29914771489983a74f2085c0298d" Feb 17 00:23:01 crc kubenswrapper[5109]: I0217 00:23:01.328293 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572p8d57" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.472389 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-74dvh"] Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473324 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="util" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473335 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="util" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473355 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="pull" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473362 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="pull" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473375 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473381 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473395 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="pull" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473400 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="pull" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473406 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473411 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473425 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="util" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473429 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="util" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473515 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4e5132f-4971-4da9-a672-d47dd175706b" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.473526 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="72c6aa0c-3846-4057-be15-63d2e9e5f270" containerName="extract" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.476635 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-74dvh"] Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.476725 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.532448 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-2bfcx\"" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.633152 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4sf\" (UniqueName: \"kubernetes.io/projected/af5ebf50-c291-4382-a449-c201099497ad-kube-api-access-6v4sf\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.633224 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/af5ebf50-c291-4382-a449-c201099497ad-runner\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.734395 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4sf\" (UniqueName: \"kubernetes.io/projected/af5ebf50-c291-4382-a449-c201099497ad-kube-api-access-6v4sf\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.734445 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/af5ebf50-c291-4382-a449-c201099497ad-runner\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " 
pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.734937 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/af5ebf50-c291-4382-a449-c201099497ad-runner\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.750928 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4sf\" (UniqueName: \"kubernetes.io/projected/af5ebf50-c291-4382-a449-c201099497ad-kube-api-access-6v4sf\") pod \"service-telemetry-operator-794b5697c7-74dvh\" (UID: \"af5ebf50-c291-4382-a449-c201099497ad\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:07 crc kubenswrapper[5109]: I0217 00:23:07.845477 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" Feb 17 00:23:08 crc kubenswrapper[5109]: I0217 00:23:08.257692 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-74dvh"] Feb 17 00:23:08 crc kubenswrapper[5109]: I0217 00:23:08.382659 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" event={"ID":"af5ebf50-c291-4382-a449-c201099497ad","Type":"ContainerStarted","Data":"dce782f73aa370fee894eb4ab96e9b6a8d318a0328156c9f83c229ef4cb6c84b"} Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.312797 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-jbws9"] Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.415271 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-jbws9"] Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.415420 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.417165 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-r6bsn\"" Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.471963 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4zv\" (UniqueName: \"kubernetes.io/projected/ec0b729b-9a17-4d33-b500-e0e3622378e2-kube-api-access-cg4zv\") pod \"interconnect-operator-78b9bd8798-jbws9\" (UID: \"ec0b729b-9a17-4d33-b500-e0e3622378e2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.573189 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cg4zv\" (UniqueName: \"kubernetes.io/projected/ec0b729b-9a17-4d33-b500-e0e3622378e2-kube-api-access-cg4zv\") pod \"interconnect-operator-78b9bd8798-jbws9\" (UID: \"ec0b729b-9a17-4d33-b500-e0e3622378e2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.596880 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg4zv\" (UniqueName: \"kubernetes.io/projected/ec0b729b-9a17-4d33-b500-e0e3622378e2-kube-api-access-cg4zv\") pod \"interconnect-operator-78b9bd8798-jbws9\" (UID: \"ec0b729b-9a17-4d33-b500-e0e3622378e2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" Feb 17 00:23:09 crc kubenswrapper[5109]: I0217 00:23:09.734228 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" Feb 17 00:23:10 crc kubenswrapper[5109]: I0217 00:23:10.156080 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-jbws9"] Feb 17 00:23:10 crc kubenswrapper[5109]: W0217 00:23:10.169814 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec0b729b_9a17_4d33_b500_e0e3622378e2.slice/crio-2d4baae182c22466bcc97b1618fda99048113424e13c48fe9b6974f61159e96e WatchSource:0}: Error finding container 2d4baae182c22466bcc97b1618fda99048113424e13c48fe9b6974f61159e96e: Status 404 returned error can't find the container with id 2d4baae182c22466bcc97b1618fda99048113424e13c48fe9b6974f61159e96e Feb 17 00:23:10 crc kubenswrapper[5109]: I0217 00:23:10.401944 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" event={"ID":"ec0b729b-9a17-4d33-b500-e0e3622378e2","Type":"ContainerStarted","Data":"2d4baae182c22466bcc97b1618fda99048113424e13c48fe9b6974f61159e96e"} Feb 17 00:23:15 crc kubenswrapper[5109]: I0217 00:23:15.437029 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" event={"ID":"af5ebf50-c291-4382-a449-c201099497ad","Type":"ContainerStarted","Data":"6782800123a3d9529ddb04ba1763f2fc5a279d74cd5b4fec0c3df740ca0fc23b"} Feb 17 00:23:15 crc kubenswrapper[5109]: I0217 00:23:15.460442 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-74dvh" podStartSLOduration=1.86581513 podStartE2EDuration="8.4604238s" podCreationTimestamp="2026-02-17 00:23:07 +0000 UTC" firstStartedPulling="2026-02-17 00:23:08.265462125 +0000 UTC m=+859.597016893" lastFinishedPulling="2026-02-17 00:23:14.860070795 +0000 UTC m=+866.191625563" observedRunningTime="2026-02-17 
00:23:15.458357495 +0000 UTC m=+866.789912283" watchObservedRunningTime="2026-02-17 00:23:15.4604238 +0000 UTC m=+866.791978558" Feb 17 00:23:21 crc kubenswrapper[5109]: I0217 00:23:21.504010 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" event={"ID":"ec0b729b-9a17-4d33-b500-e0e3622378e2","Type":"ContainerStarted","Data":"b6081190cc7460fd27b08084917bbf4e871ce77c71bb12d85e800f8805f2e188"} Feb 17 00:23:21 crc kubenswrapper[5109]: I0217 00:23:21.520619 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-jbws9" podStartSLOduration=2.373461535 podStartE2EDuration="12.520577391s" podCreationTimestamp="2026-02-17 00:23:09 +0000 UTC" firstStartedPulling="2026-02-17 00:23:10.171308867 +0000 UTC m=+861.502863625" lastFinishedPulling="2026-02-17 00:23:20.318424703 +0000 UTC m=+871.649979481" observedRunningTime="2026-02-17 00:23:21.518530607 +0000 UTC m=+872.850085455" watchObservedRunningTime="2026-02-17 00:23:21.520577391 +0000 UTC m=+872.852132149" Feb 17 00:23:30 crc kubenswrapper[5109]: I0217 00:23:30.800688 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:23:30 crc kubenswrapper[5109]: I0217 00:23:30.801410 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.494197 5109 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.517017 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.523212 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.523532 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-kf628\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.523807 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.523984 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.524203 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.524390 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.525689 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.546437 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583663 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583705 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583748 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583770 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583788 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: 
\"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583802 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.583836 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8fph\" (UniqueName: \"kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685105 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685176 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685263 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685309 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685344 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685375 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.685448 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p8fph\" (UniqueName: \"kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.687048 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.692629 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.692726 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.693351 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 
00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.702610 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.703108 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.709376 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8fph\" (UniqueName: \"kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph\") pod \"default-interconnect-55bf8d5cb-wm65j\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:36 crc kubenswrapper[5109]: I0217 00:23:36.845125 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:23:37 crc kubenswrapper[5109]: I0217 00:23:37.359235 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:23:37 crc kubenswrapper[5109]: I0217 00:23:37.644512 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" event={"ID":"d8eb5216-1a79-41e6-bda2-411d232e6e56","Type":"ContainerStarted","Data":"bb53c9629b34c93de50a3c9ef55a3e930ab59444ba36d389c44ac3b746d49ce0"} Feb 17 00:23:42 crc kubenswrapper[5109]: I0217 00:23:42.677238 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" event={"ID":"d8eb5216-1a79-41e6-bda2-411d232e6e56","Type":"ContainerStarted","Data":"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d"} Feb 17 00:23:42 crc kubenswrapper[5109]: I0217 00:23:42.711301 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" podStartSLOduration=2.007332099 podStartE2EDuration="6.711272238s" podCreationTimestamp="2026-02-17 00:23:36 +0000 UTC" firstStartedPulling="2026-02-17 00:23:37.371635546 +0000 UTC m=+888.703190344" lastFinishedPulling="2026-02-17 00:23:42.075575725 +0000 UTC m=+893.407130483" observedRunningTime="2026-02-17 00:23:42.696959018 +0000 UTC m=+894.028513786" watchObservedRunningTime="2026-02-17 00:23:42.711272238 +0000 UTC m=+894.042827036" Feb 17 00:23:46 crc kubenswrapper[5109]: I0217 00:23:46.721510 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.151998 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.152166 5109 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.154464 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.155280 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.155504 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.155433 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.155479 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.155649 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-l6pwh\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.156397 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.160062 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.160127 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.170653 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242321 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45btz\" (UniqueName: \"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-kube-api-access-45btz\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242421 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242480 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242577 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242681 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-tls-assets\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242716 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242759 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242804 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d92b9849-5e99-4939-abb4-27b6fe87adb3-config-out\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242835 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242867 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-web-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.242911 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.243158 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344525 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344633 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45btz\" (UniqueName: \"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-kube-api-access-45btz\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: 
I0217 00:23:47.344694 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344744 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344832 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344869 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-tls-assets\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344898 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 
00:23:47.344941 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.344985 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d92b9849-5e99-4939-abb4-27b6fe87adb3-config-out\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.345013 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.345046 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-web-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.345086 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 
00:23:47.346227 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: E0217 00:23:47.347122 5109 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 17 00:23:47 crc kubenswrapper[5109]: E0217 00:23:47.347285 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls podName:d92b9849-5e99-4939-abb4-27b6fe87adb3 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:47.847244875 +0000 UTC m=+899.178799703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d92b9849-5e99-4939-abb4-27b6fe87adb3") : secret "default-prometheus-proxy-tls" not found Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.349157 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.349691 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: 
\"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.351469 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d92b9849-5e99-4939-abb4-27b6fe87adb3-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.354447 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-tls-assets\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.354663 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d92b9849-5e99-4939-abb4-27b6fe87adb3-config-out\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0" Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.355166 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.355372 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/32b803e7ebbef24ff54e63af46e4365f978dfef5b58e735ca08b7af28badde6b/globalmount\"" pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.355512 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-web-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.359787 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.365636 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-config\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.370415 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45btz\" (UniqueName: \"kubernetes.io/projected/d92b9849-5e99-4939-abb4-27b6fe87adb3-kube-api-access-45btz\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.388866 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34b90f3d-eca6-4d84-9969-347e2598f7dc\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: I0217 00:23:47.852635 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:47 crc kubenswrapper[5109]: E0217 00:23:47.853270 5109 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 17 00:23:47 crc kubenswrapper[5109]: E0217 00:23:47.853637 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls podName:d92b9849-5e99-4939-abb4-27b6fe87adb3 nodeName:}" failed. No retries permitted until 2026-02-17 00:23:48.853583838 +0000 UTC m=+900.185138626 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d92b9849-5e99-4939-abb4-27b6fe87adb3") : secret "default-prometheus-proxy-tls" not found
Feb 17 00:23:48 crc kubenswrapper[5109]: I0217 00:23:48.869946 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:48 crc kubenswrapper[5109]: I0217 00:23:48.878581 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d92b9849-5e99-4939-abb4-27b6fe87adb3-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d92b9849-5e99-4939-abb4-27b6fe87adb3\") " pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:48 crc kubenswrapper[5109]: I0217 00:23:48.997500 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.562929 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 17 00:23:49 crc kubenswrapper[5109]: W0217 00:23:49.577205 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd92b9849_5e99_4939_abb4_27b6fe87adb3.slice/crio-53ce0caaf3506eee892567b976892ed2e90543aec4b74b6c7cebb4abb5cd82a3 WatchSource:0}: Error finding container 53ce0caaf3506eee892567b976892ed2e90543aec4b74b6c7cebb4abb5cd82a3: Status 404 returned error can't find the container with id 53ce0caaf3506eee892567b976892ed2e90543aec4b74b6c7cebb4abb5cd82a3
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.735041 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerStarted","Data":"53ce0caaf3506eee892567b976892ed2e90543aec4b74b6c7cebb4abb5cd82a3"}
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.898217 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log"
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.904700 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log"
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.907149 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:23:49 crc kubenswrapper[5109]: I0217 00:23:49.910678 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:23:54 crc kubenswrapper[5109]: I0217 00:23:54.774683 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerStarted","Data":"7517c2663347e23672ad4a994da1816683997a1ad1861c35080e8702b33a5d0e"}
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.477207 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"]
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.487487 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"]
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.487660 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.612843 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptdj\" (UniqueName: \"kubernetes.io/projected/192a2b06-be82-48ba-ba35-bb94271c00a4-kube-api-access-6ptdj\") pod \"default-snmp-webhook-6774d8dfbc-pp685\" (UID: \"192a2b06-be82-48ba-ba35-bb94271c00a4\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.715009 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptdj\" (UniqueName: \"kubernetes.io/projected/192a2b06-be82-48ba-ba35-bb94271c00a4-kube-api-access-6ptdj\") pod \"default-snmp-webhook-6774d8dfbc-pp685\" (UID: \"192a2b06-be82-48ba-ba35-bb94271c00a4\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.750498 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptdj\" (UniqueName: \"kubernetes.io/projected/192a2b06-be82-48ba-ba35-bb94271c00a4-kube-api-access-6ptdj\") pod \"default-snmp-webhook-6774d8dfbc-pp685\" (UID: \"192a2b06-be82-48ba-ba35-bb94271c00a4\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"
Feb 17 00:23:57 crc kubenswrapper[5109]: I0217 00:23:57.816787 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"
Feb 17 00:23:58 crc kubenswrapper[5109]: I0217 00:23:58.088198 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-pp685"]
Feb 17 00:23:58 crc kubenswrapper[5109]: I0217 00:23:58.807695 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685" event={"ID":"192a2b06-be82-48ba-ba35-bb94271c00a4","Type":"ContainerStarted","Data":"444eebb91ab647bccaa19a23c1b39aad90de2400f94761ed0d82d0df36cab66f"}
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.120067 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521464-pzhn2"]
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.140097 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.141140 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521464-pzhn2"]
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.143635 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.143967 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.144086 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.255330 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75rw\" (UniqueName: \"kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw\") pod \"auto-csr-approver-29521464-pzhn2\" (UID: \"f09f79a5-2bda-4621-97cc-fb9a19d50348\") " pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.356565 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v75rw\" (UniqueName: \"kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw\") pod \"auto-csr-approver-29521464-pzhn2\" (UID: \"f09f79a5-2bda-4621-97cc-fb9a19d50348\") " pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.378715 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75rw\" (UniqueName: \"kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw\") pod \"auto-csr-approver-29521464-pzhn2\" (UID: \"f09f79a5-2bda-4621-97cc-fb9a19d50348\") " pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.460355 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.800210 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.800273 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.859470 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.906656 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.906803 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.910959 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.911186 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-42vlm\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.911231 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.911259 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.911450 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Feb 17 00:24:00 crc kubenswrapper[5109]: I0217 00:24:00.911556 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065219 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-web-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065272 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-config-volume\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065290 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065327 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-tls-assets\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065344 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065368 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065406 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcfx\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-kube-api-access-4pcfx\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065427 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/637374b2-565d-4fbc-ba62-e151d5fef990-config-out\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.065469 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166282 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-web-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166337 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-config-volume\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166672 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166801 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-tls-assets\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166833 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166880 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166959 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcfx\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-kube-api-access-4pcfx\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.166991 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/637374b2-565d-4fbc-ba62-e151d5fef990-config-out\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.167078 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: E0217 00:24:01.167475 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:01 crc kubenswrapper[5109]: E0217 00:24:01.167665 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls podName:637374b2-565d-4fbc-ba62-e151d5fef990 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:01.667632417 +0000 UTC m=+912.999187215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "637374b2-565d-4fbc-ba62-e151d5fef990") : secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.172126 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.172433 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.172473 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7b6aa9b0d8708b88e2bfdaa1bbbe507c7b4de3cbd3eba84ac19faef62500f7b6/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.173103 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.173137 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-web-config\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.185206 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/637374b2-565d-4fbc-ba62-e151d5fef990-config-out\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.187122 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-tls-assets\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.190020 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcfx\" (UniqueName: \"kubernetes.io/projected/637374b2-565d-4fbc-ba62-e151d5fef990-kube-api-access-4pcfx\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.190918 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-config-volume\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.205640 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8274af7b-78b9-4a5d-bcf0-b95aefac60e6\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.674464 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:01 crc kubenswrapper[5109]: E0217 00:24:01.674678 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:01 crc kubenswrapper[5109]: E0217 00:24:01.674933 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls podName:637374b2-565d-4fbc-ba62-e151d5fef990 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:02.674913552 +0000 UTC m=+914.006468310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "637374b2-565d-4fbc-ba62-e151d5fef990") : secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.832339 5109 generic.go:358] "Generic (PLEG): container finished" podID="d92b9849-5e99-4939-abb4-27b6fe87adb3" containerID="7517c2663347e23672ad4a994da1816683997a1ad1861c35080e8702b33a5d0e" exitCode=0
Feb 17 00:24:01 crc kubenswrapper[5109]: I0217 00:24:01.832439 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerDied","Data":"7517c2663347e23672ad4a994da1816683997a1ad1861c35080e8702b33a5d0e"}
Feb 17 00:24:02 crc kubenswrapper[5109]: I0217 00:24:02.692988 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:02 crc kubenswrapper[5109]: E0217 00:24:02.693276 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:02 crc kubenswrapper[5109]: E0217 00:24:02.693909 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls podName:637374b2-565d-4fbc-ba62-e151d5fef990 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:04.693570191 +0000 UTC m=+916.025124949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "637374b2-565d-4fbc-ba62-e151d5fef990") : secret "default-alertmanager-proxy-tls" not found
Feb 17 00:24:04 crc kubenswrapper[5109]: I0217 00:24:04.590037 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521464-pzhn2"]
Feb 17 00:24:04 crc kubenswrapper[5109]: I0217 00:24:04.721908 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:04 crc kubenswrapper[5109]: I0217 00:24:04.728747 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/637374b2-565d-4fbc-ba62-e151d5fef990-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"637374b2-565d-4fbc-ba62-e151d5fef990\") " pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:04 crc kubenswrapper[5109]: I0217 00:24:04.826590 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 17 00:24:05 crc kubenswrapper[5109]: I0217 00:24:05.862508 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521464-pzhn2" event={"ID":"f09f79a5-2bda-4621-97cc-fb9a19d50348","Type":"ContainerStarted","Data":"75a6b3bb8f5f4d75696221ddd2bbdce8d2eee7c7dab27cdce1a94b1e79919908"}
Feb 17 00:24:06 crc kubenswrapper[5109]: I0217 00:24:06.915351 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 17 00:24:08 crc kubenswrapper[5109]: W0217 00:24:08.910123 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637374b2_565d_4fbc_ba62_e151d5fef990.slice/crio-32fc031b8e0717916b1aa173dcac42b83f4dbc1a3e21a2ea690405698a56dd57 WatchSource:0}: Error finding container 32fc031b8e0717916b1aa173dcac42b83f4dbc1a3e21a2ea690405698a56dd57: Status 404 returned error can't find the container with id 32fc031b8e0717916b1aa173dcac42b83f4dbc1a3e21a2ea690405698a56dd57
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.891073 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685" event={"ID":"192a2b06-be82-48ba-ba35-bb94271c00a4","Type":"ContainerStarted","Data":"53dde7fab306882eefe125151724244ccafafcce58a7825edf8fa17dc6818dda"}
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.892403 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerStarted","Data":"32fc031b8e0717916b1aa173dcac42b83f4dbc1a3e21a2ea690405698a56dd57"}
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.893830 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521464-pzhn2" event={"ID":"f09f79a5-2bda-4621-97cc-fb9a19d50348","Type":"ContainerStarted","Data":"ddee3129cf41bf9312e30f8f6a22678c27fbc042d7a66610b5915bc56ebf1645"}
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.895317 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerStarted","Data":"60c2f69d5c0ea4d5e6ec3abb7526f4f436af18350e861d26dfa0638fb151627c"}
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.921368 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-pp685" podStartSLOduration=4.742455585 podStartE2EDuration="12.921349055s" podCreationTimestamp="2026-02-17 00:23:57 +0000 UTC" firstStartedPulling="2026-02-17 00:23:58.101106589 +0000 UTC m=+909.432661387" lastFinishedPulling="2026-02-17 00:24:06.280000099 +0000 UTC m=+917.611554857" observedRunningTime="2026-02-17 00:24:09.917947605 +0000 UTC m=+921.249502363" watchObservedRunningTime="2026-02-17 00:24:09.921349055 +0000 UTC m=+921.252903813"
Feb 17 00:24:09 crc kubenswrapper[5109]: I0217 00:24:09.933899 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29521464-pzhn2" podStartSLOduration=6.583455115 podStartE2EDuration="9.933882038s" podCreationTimestamp="2026-02-17 00:24:00 +0000 UTC" firstStartedPulling="2026-02-17 00:24:05.732468927 +0000 UTC m=+917.064023715" lastFinishedPulling="2026-02-17 00:24:09.08289588 +0000 UTC m=+920.414450638" observedRunningTime="2026-02-17 00:24:09.932784419 +0000 UTC m=+921.264339177" watchObservedRunningTime="2026-02-17 00:24:09.933882038 +0000 UTC m=+921.265436796"
Feb 17 00:24:10 crc kubenswrapper[5109]: I0217 00:24:10.903151 5109 generic.go:358] "Generic (PLEG): container finished" podID="f09f79a5-2bda-4621-97cc-fb9a19d50348" containerID="ddee3129cf41bf9312e30f8f6a22678c27fbc042d7a66610b5915bc56ebf1645" exitCode=0
Feb 17 00:24:10 crc kubenswrapper[5109]: I0217 00:24:10.903636 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521464-pzhn2" event={"ID":"f09f79a5-2bda-4621-97cc-fb9a19d50348","Type":"ContainerDied","Data":"ddee3129cf41bf9312e30f8f6a22678c27fbc042d7a66610b5915bc56ebf1645"}
Feb 17 00:24:11 crc kubenswrapper[5109]: I0217 00:24:11.911640 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerStarted","Data":"d77a22b44f8c28a4740ef0e38a1546837dafac9f898beefa8a1c2ce5b88f1046"}
Feb 17 00:24:11 crc kubenswrapper[5109]: I0217 00:24:11.913829 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerStarted","Data":"b24a79af371fdbc40f81197fdc56e9f6bccf2d6641aee3f7a300f265fb69e13d"}
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.234324 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.334899 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v75rw\" (UniqueName: \"kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw\") pod \"f09f79a5-2bda-4621-97cc-fb9a19d50348\" (UID: \"f09f79a5-2bda-4621-97cc-fb9a19d50348\") "
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.360871 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw" (OuterVolumeSpecName: "kube-api-access-v75rw") pod "f09f79a5-2bda-4621-97cc-fb9a19d50348" (UID: "f09f79a5-2bda-4621-97cc-fb9a19d50348"). InnerVolumeSpecName "kube-api-access-v75rw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.436880 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v75rw\" (UniqueName: \"kubernetes.io/projected/f09f79a5-2bda-4621-97cc-fb9a19d50348-kube-api-access-v75rw\") on node \"crc\" DevicePath \"\""
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.566825 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29521458-7jmzl"]
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.571153 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29521458-7jmzl"]
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.921303 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521464-pzhn2"
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.921339 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521464-pzhn2" event={"ID":"f09f79a5-2bda-4621-97cc-fb9a19d50348","Type":"ContainerDied","Data":"75a6b3bb8f5f4d75696221ddd2bbdce8d2eee7c7dab27cdce1a94b1e79919908"}
Feb 17 00:24:12 crc kubenswrapper[5109]: I0217 00:24:12.921402 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a6b3bb8f5f4d75696221ddd2bbdce8d2eee7c7dab27cdce1a94b1e79919908"
Feb 17 00:24:13 crc kubenswrapper[5109]: I0217 00:24:13.474490 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216e9b1d-31c8-4016-9031-97f9f4ec879e" path="/var/lib/kubelet/pods/216e9b1d-31c8-4016-9031-97f9f4ec879e/volumes"
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.092755 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g"]
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.093366 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f09f79a5-2bda-4621-97cc-fb9a19d50348" containerName="oc"
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.093380 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09f79a5-2bda-4621-97cc-fb9a19d50348" containerName="oc"
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.093526 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f09f79a5-2bda-4621-97cc-fb9a19d50348" containerName="oc"
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.143080 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g"]
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.143278 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g"
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.146418 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-tqdpl\""
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.146952 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.149752 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.153173 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.264137 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e691a630-aefc-4763-9b47-1ce96aac5fa7-sg-core-config\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.264435 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e691a630-aefc-4763-9b47-1ce96aac5fa7-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.264546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.264719 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.264828 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5g5\" (UniqueName: \"kubernetes.io/projected/e691a630-aefc-4763-9b47-1ce96aac5fa7-kube-api-access-gp5g5\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: 
\"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.365870 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5g5\" (UniqueName: \"kubernetes.io/projected/e691a630-aefc-4763-9b47-1ce96aac5fa7-kube-api-access-gp5g5\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.365919 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e691a630-aefc-4763-9b47-1ce96aac5fa7-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.365947 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e691a630-aefc-4763-9b47-1ce96aac5fa7-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.365966 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc 
kubenswrapper[5109]: I0217 00:24:14.366054 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.366741 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e691a630-aefc-4763-9b47-1ce96aac5fa7-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: E0217 00:24:14.366424 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 17 00:24:14 crc kubenswrapper[5109]: E0217 00:24:14.366850 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls podName:e691a630-aefc-4763-9b47-1ce96aac5fa7 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:14.866832157 +0000 UTC m=+926.198386915 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" (UID: "e691a630-aefc-4763-9b47-1ce96aac5fa7") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.367009 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e691a630-aefc-4763-9b47-1ce96aac5fa7-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.380361 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.386143 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5g5\" (UniqueName: \"kubernetes.io/projected/e691a630-aefc-4763-9b47-1ce96aac5fa7-kube-api-access-gp5g5\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: I0217 00:24:14.872427 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:14 crc kubenswrapper[5109]: E0217 00:24:14.872565 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 17 00:24:14 crc kubenswrapper[5109]: E0217 00:24:14.872627 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls podName:e691a630-aefc-4763-9b47-1ce96aac5fa7 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:15.872612582 +0000 UTC m=+927.204167340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" (UID: "e691a630-aefc-4763-9b47-1ce96aac5fa7") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 17 00:24:15 crc kubenswrapper[5109]: I0217 00:24:15.894971 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" (UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:15 crc kubenswrapper[5109]: I0217 00:24:15.902120 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e691a630-aefc-4763-9b47-1ce96aac5fa7-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-sdm8g\" 
(UID: \"e691a630-aefc-4763-9b47-1ce96aac5fa7\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:15 crc kubenswrapper[5109]: I0217 00:24:15.960726 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.385247 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb"] Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.394818 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb"] Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.394951 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.397587 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.398230 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.503215 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.503258 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5970fe16-c4c2-4d25-9d35-f850635f1a63-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.503334 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdv5\" (UniqueName: \"kubernetes.io/projected/5970fe16-c4c2-4d25-9d35-f850635f1a63-kube-api-access-bmdv5\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.503405 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.503423 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5970fe16-c4c2-4d25-9d35-f850635f1a63-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.604821 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.604886 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5970fe16-c4c2-4d25-9d35-f850635f1a63-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.604974 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.604999 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5970fe16-c4c2-4d25-9d35-f850635f1a63-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.605057 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdv5\" (UniqueName: \"kubernetes.io/projected/5970fe16-c4c2-4d25-9d35-f850635f1a63-kube-api-access-bmdv5\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: 
\"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: E0217 00:24:16.605084 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 17 00:24:16 crc kubenswrapper[5109]: E0217 00:24:16.605198 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls podName:5970fe16-c4c2-4d25-9d35-f850635f1a63 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:17.105173062 +0000 UTC m=+928.436727820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" (UID: "5970fe16-c4c2-4d25-9d35-f850635f1a63") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.605362 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5970fe16-c4c2-4d25-9d35-f850635f1a63-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.605893 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5970fe16-c4c2-4d25-9d35-f850635f1a63-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc 
kubenswrapper[5109]: I0217 00:24:16.617076 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:16 crc kubenswrapper[5109]: I0217 00:24:16.625318 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdv5\" (UniqueName: \"kubernetes.io/projected/5970fe16-c4c2-4d25-9d35-f850635f1a63-kube-api-access-bmdv5\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:17 crc kubenswrapper[5109]: I0217 00:24:17.114729 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:17 crc kubenswrapper[5109]: E0217 00:24:17.114878 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 17 00:24:17 crc kubenswrapper[5109]: E0217 00:24:17.114931 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls podName:5970fe16-c4c2-4d25-9d35-f850635f1a63 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:18.114917282 +0000 UTC m=+929.446472040 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" (UID: "5970fe16-c4c2-4d25-9d35-f850635f1a63") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 17 00:24:17 crc kubenswrapper[5109]: I0217 00:24:17.956961 5109 generic.go:358] "Generic (PLEG): container finished" podID="637374b2-565d-4fbc-ba62-e151d5fef990" containerID="b24a79af371fdbc40f81197fdc56e9f6bccf2d6641aee3f7a300f265fb69e13d" exitCode=0 Feb 17 00:24:17 crc kubenswrapper[5109]: I0217 00:24:17.957055 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerDied","Data":"b24a79af371fdbc40f81197fdc56e9f6bccf2d6641aee3f7a300f265fb69e13d"} Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.130529 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.135204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5970fe16-c4c2-4d25-9d35-f850635f1a63-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb\" (UID: \"5970fe16-c4c2-4d25-9d35-f850635f1a63\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.215711 5109 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.615649 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g"] Feb 17 00:24:18 crc kubenswrapper[5109]: W0217 00:24:18.634488 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode691a630_aefc_4763_9b47_1ce96aac5fa7.slice/crio-23625fd1dec9e93e3a64d74351b765f00c422b01ef55ed18ca0c91de65db9fbd WatchSource:0}: Error finding container 23625fd1dec9e93e3a64d74351b765f00c422b01ef55ed18ca0c91de65db9fbd: Status 404 returned error can't find the container with id 23625fd1dec9e93e3a64d74351b765f00c422b01ef55ed18ca0c91de65db9fbd Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.692702 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb"] Feb 17 00:24:18 crc kubenswrapper[5109]: W0217 00:24:18.700245 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5970fe16_c4c2_4d25_9d35_f850635f1a63.slice/crio-ccbabdec8112466635db3177a414b29546252ca5748d7b03ed3249ebac3613a2 WatchSource:0}: Error finding container ccbabdec8112466635db3177a414b29546252ca5748d7b03ed3249ebac3613a2: Status 404 returned error can't find the container with id ccbabdec8112466635db3177a414b29546252ca5748d7b03ed3249ebac3613a2 Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.964966 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"23625fd1dec9e93e3a64d74351b765f00c422b01ef55ed18ca0c91de65db9fbd"} Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 
00:24:18.968780 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d92b9849-5e99-4939-abb4-27b6fe87adb3","Type":"ContainerStarted","Data":"c8666514a859c17c6451e9d11478480a20317a21d0bc65c1e9aea20e7643d770"} Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.969626 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"ccbabdec8112466635db3177a414b29546252ca5748d7b03ed3249ebac3613a2"} Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.997965 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.998005 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 17 00:24:18 crc kubenswrapper[5109]: I0217 00:24:18.998997 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.313505185 podStartE2EDuration="33.998980773s" podCreationTimestamp="2026-02-17 00:23:45 +0000 UTC" firstStartedPulling="2026-02-17 00:23:49.580159529 +0000 UTC m=+900.911714297" lastFinishedPulling="2026-02-17 00:24:18.265635127 +0000 UTC m=+929.597189885" observedRunningTime="2026-02-17 00:24:18.995170062 +0000 UTC m=+930.326724830" watchObservedRunningTime="2026-02-17 00:24:18.998980773 +0000 UTC m=+930.330535531" Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.041579 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.857206 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"] Feb 17 
00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.864195 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.868139 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.868350 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.877614 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"]
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.956686 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/061c1a1b-358c-46f3-818a-3531ced45ab0-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.956749 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/061c1a1b-358c-46f3-818a-3531ced45ab0-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.956793 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.956815 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:19 crc kubenswrapper[5109]: I0217 00:24:19.956837 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tst6t\" (UniqueName: \"kubernetes.io/projected/061c1a1b-358c-46f3-818a-3531ced45ab0-kube-api-access-tst6t\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.016055 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.065068 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/061c1a1b-358c-46f3-818a-3531ced45ab0-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.065178 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/061c1a1b-358c-46f3-818a-3531ced45ab0-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.065217 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.066017 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: E0217 00:24:20.066138 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 17 00:24:20 crc kubenswrapper[5109]: E0217 00:24:20.066229 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls podName:061c1a1b-358c-46f3-818a-3531ced45ab0 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:20.566203722 +0000 UTC m=+931.897758500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" (UID: "061c1a1b-358c-46f3-818a-3531ced45ab0") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.066247 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/061c1a1b-358c-46f3-818a-3531ced45ab0-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.066329 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/061c1a1b-358c-46f3-818a-3531ced45ab0-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.067004 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tst6t\" (UniqueName: \"kubernetes.io/projected/061c1a1b-358c-46f3-818a-3531ced45ab0-kube-api-access-tst6t\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.074618 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.085627 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tst6t\" (UniqueName: \"kubernetes.io/projected/061c1a1b-358c-46f3-818a-3531ced45ab0-kube-api-access-tst6t\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.572736 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:20 crc kubenswrapper[5109]: E0217 00:24:20.573019 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 17 00:24:20 crc kubenswrapper[5109]: E0217 00:24:20.574121 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls podName:061c1a1b-358c-46f3-818a-3531ced45ab0 nodeName:}" failed. No retries permitted until 2026-02-17 00:24:21.574084112 +0000 UTC m=+932.905638930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" (UID: "061c1a1b-358c-46f3-818a-3531ced45ab0") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.983982 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"dca66477e08800113504db5f9cf8de3ca6943054eacf62a9dc5b979e12f06c16"}
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.985685 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerStarted","Data":"f91bc36679211814f58a1826981d9b077e27066fb035fda06c7e46638ebd7431"}
Feb 17 00:24:20 crc kubenswrapper[5109]: I0217 00:24:20.987716 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"38fb2589eff821764e67c8e277d0ac1327973f6f31295b80a33974bbb6757723"}
Feb 17 00:24:21 crc kubenswrapper[5109]: I0217 00:24:21.585802 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:21 crc kubenswrapper[5109]: I0217 00:24:21.591428 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/061c1a1b-358c-46f3-818a-3531ced45ab0-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx\" (UID: \"061c1a1b-358c-46f3-818a-3531ced45ab0\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:21 crc kubenswrapper[5109]: I0217 00:24:21.683142 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"
Feb 17 00:24:22 crc kubenswrapper[5109]: I0217 00:24:22.122196 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx"]
Feb 17 00:24:22 crc kubenswrapper[5109]: W0217 00:24:22.140526 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061c1a1b_358c_46f3_818a_3531ced45ab0.slice/crio-e2147a6da19b81c0749d605999f9711afa0f0f0122d3af41ad34044bbb57ed5a WatchSource:0}: Error finding container e2147a6da19b81c0749d605999f9711afa0f0f0122d3af41ad34044bbb57ed5a: Status 404 returned error can't find the container with id e2147a6da19b81c0749d605999f9711afa0f0f0122d3af41ad34044bbb57ed5a
Feb 17 00:24:23 crc kubenswrapper[5109]: I0217 00:24:23.001302 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerStarted","Data":"aecff44e899b5b3909268558ab5f1f6dfe92f15d1167e05d3a4a930d7c05a4fe"}
Feb 17 00:24:23 crc kubenswrapper[5109]: I0217 00:24:23.002849 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"91d8392e130a13f6ca7496422c4bec20f6bd633c20df697eb20f016b2e2f3522"}
Feb 17 00:24:23 crc kubenswrapper[5109]: I0217 00:24:23.002895 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"e2147a6da19b81c0749d605999f9711afa0f0f0122d3af41ad34044bbb57ed5a"}
Feb 17 00:24:24 crc kubenswrapper[5109]: I0217 00:24:24.014508 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"637374b2-565d-4fbc-ba62-e151d5fef990","Type":"ContainerStarted","Data":"c87edf314df1471b572afe62eca1d5c7b56479c06b76b98d00038f6e82ef6c54"}
Feb 17 00:24:24 crc kubenswrapper[5109]: I0217 00:24:24.039354 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.086356011 podStartE2EDuration="25.039336674s" podCreationTimestamp="2026-02-17 00:23:59 +0000 UTC" firstStartedPulling="2026-02-17 00:24:17.958276078 +0000 UTC m=+929.289830836" lastFinishedPulling="2026-02-17 00:24:22.911256741 +0000 UTC m=+934.242811499" observedRunningTime="2026-02-17 00:24:24.036512819 +0000 UTC m=+935.368067597" watchObservedRunningTime="2026-02-17 00:24:24.039336674 +0000 UTC m=+935.370891432"
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.743756 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"]
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.827405 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"]
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.827655 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.839583 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.839696 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.979348 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.979452 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.979481 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:26 crc kubenswrapper[5109]: I0217 00:24:26.979512 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66c2\" (UniqueName: \"kubernetes.io/projected/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-kube-api-access-p66c2\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.080434 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.080534 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.080562 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.080611 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p66c2\" (UniqueName: \"kubernetes.io/projected/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-kube-api-access-p66c2\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.081005 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.081688 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.087912 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.096932 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66c2\" (UniqueName: \"kubernetes.io/projected/fcd429e4-9a3c-42da-9b85-664e59d6d2bd-kube-api-access-p66c2\") pod \"default-cloud1-coll-event-smartgateway-647df7d596-n5brv\" (UID: \"fcd429e4-9a3c-42da-9b85-664e59d6d2bd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.148604 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.709004 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"]
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.756695 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"]
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.756807 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.759305 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.892709 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6gb\" (UniqueName: \"kubernetes.io/projected/e8e44638-f54f-4881-86bc-4d0f985613cb-kube-api-access-qt6gb\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.892768 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8e44638-f54f-4881-86bc-4d0f985613cb-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.892794 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e8e44638-f54f-4881-86bc-4d0f985613cb-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.892846 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e8e44638-f54f-4881-86bc-4d0f985613cb-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.993939 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qt6gb\" (UniqueName: \"kubernetes.io/projected/e8e44638-f54f-4881-86bc-4d0f985613cb-kube-api-access-qt6gb\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.994007 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8e44638-f54f-4881-86bc-4d0f985613cb-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.994035 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e8e44638-f54f-4881-86bc-4d0f985613cb-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.994090 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e8e44638-f54f-4881-86bc-4d0f985613cb-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.994578 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e8e44638-f54f-4881-86bc-4d0f985613cb-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:27 crc kubenswrapper[5109]: I0217 00:24:27.995389 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e8e44638-f54f-4881-86bc-4d0f985613cb-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:28 crc kubenswrapper[5109]: I0217 00:24:28.012152 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt6gb\" (UniqueName: \"kubernetes.io/projected/e8e44638-f54f-4881-86bc-4d0f985613cb-kube-api-access-qt6gb\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:28 crc kubenswrapper[5109]: I0217 00:24:28.015110 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/e8e44638-f54f-4881-86bc-4d0f985613cb-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs\" (UID: \"e8e44638-f54f-4881-86bc-4d0f985613cb\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:28 crc kubenswrapper[5109]: I0217 00:24:28.077894 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"
Feb 17 00:24:28 crc kubenswrapper[5109]: I0217 00:24:28.659795 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs"]
Feb 17 00:24:28 crc kubenswrapper[5109]: I0217 00:24:28.757980 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv"]
Feb 17 00:24:28 crc kubenswrapper[5109]: W0217 00:24:28.759327 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd429e4_9a3c_42da_9b85_664e59d6d2bd.slice/crio-f3bc5de5f3425bc250a6645706afe020a8ba2d94d73a1f7bd21c7833a8bb9032 WatchSource:0}: Error finding container f3bc5de5f3425bc250a6645706afe020a8ba2d94d73a1f7bd21c7833a8bb9032: Status 404 returned error can't find the container with id f3bc5de5f3425bc250a6645706afe020a8ba2d94d73a1f7bd21c7833a8bb9032
Feb 17 00:24:29 crc kubenswrapper[5109]: I0217 00:24:29.056832 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" event={"ID":"fcd429e4-9a3c-42da-9b85-664e59d6d2bd","Type":"ContainerStarted","Data":"f3bc5de5f3425bc250a6645706afe020a8ba2d94d73a1f7bd21c7833a8bb9032"}
Feb 17 00:24:29 crc kubenswrapper[5109]: I0217 00:24:29.058566 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"97ed8b0a337137d559f5e357cc6da1a9c73080ef294996bf27bd1377234eeb93"}
Feb 17 00:24:29 crc kubenswrapper[5109]: I0217 00:24:29.061214 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"97792276675b16e1b39644e0c774f6c761ac0c17143ef4b0b624ba18badbd5ec"}
Feb 17 00:24:29 crc kubenswrapper[5109]: I0217 00:24:29.065407 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"29e00f964dc1a86c667733325e631cb282810b17fa034d0bf1f9da4302330251"}
Feb 17 00:24:29 crc kubenswrapper[5109]: I0217 00:24:29.067891 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" event={"ID":"e8e44638-f54f-4881-86bc-4d0f985613cb","Type":"ContainerStarted","Data":"3a4afeb35f81cf781306791ae0a09b978ba7c72b8d2e768e24ba741869a6c770"}
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.081798 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" event={"ID":"e8e44638-f54f-4881-86bc-4d0f985613cb","Type":"ContainerStarted","Data":"fba0a88b4db5c9627129cff42e14e66eae9833b0fa58724137e3a7a5b762853f"}
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.083240 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" event={"ID":"fcd429e4-9a3c-42da-9b85-664e59d6d2bd","Type":"ContainerStarted","Data":"2fbf71b3a0146c526600b908c6b6bba161aeac60b874c773f8e306f7fe197057"}
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.799519 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.799568 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.799628 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.800100 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 00:24:30 crc kubenswrapper[5109]: I0217 00:24:30.800158 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" containerID="cri-o://d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35" gracePeriod=600
Feb 17 00:24:31 crc kubenswrapper[5109]: I0217 00:24:31.093145 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35" exitCode=0
Feb 17 00:24:31 crc kubenswrapper[5109]: I0217 00:24:31.093217 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35"}
Feb 17 00:24:31 crc kubenswrapper[5109]: I0217 00:24:31.093636 5109 scope.go:117] "RemoveContainer" containerID="4240edc1c6cdc8427405aa2a8b83638ea6ac630ece6a4d9c0ad1bed15963e71f"
Feb 17 00:24:38 crc kubenswrapper[5109]: I0217 00:24:38.174212 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.184041 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"b479ef08924b19ceeaabc54c27fe002a20c41df3ccde3fca75c9649143adb794"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.187015 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"43dad01ab2e8c4998b822d8bf9bbeaa746392c00e14ad2d139fd97045f919a8e"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.190122 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" event={"ID":"e8e44638-f54f-4881-86bc-4d0f985613cb","Type":"ContainerStarted","Data":"db18927ba77b1dc764dee13b5047cf7dc87f29bbd623f6535eac333a8d72ad3f"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.191886 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" event={"ID":"fcd429e4-9a3c-42da-9b85-664e59d6d2bd","Type":"ContainerStarted","Data":"b8c88c3ed967de0a58b116dc644bbb644e051b29bcbae113d737694af0d210da"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.194359 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"65e573bd6496e3d775b4f2ce345b76ca4cdfb111ff4536289f6e9134985ca983"}
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.207216 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" podStartSLOduration=5.925974629 podStartE2EDuration="25.207197852s" podCreationTimestamp="2026-02-17 00:24:14 +0000 UTC" firstStartedPulling="2026-02-17 00:24:18.636020219 +0000 UTC m=+929.967574977" lastFinishedPulling="2026-02-17 00:24:37.917243432 +0000 UTC m=+949.248798200" observedRunningTime="2026-02-17 00:24:39.205332113 +0000 UTC m=+950.536886881" watchObservedRunningTime="2026-02-17 00:24:39.207197852 +0000 UTC m=+950.538752620"
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.240332 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" podStartSLOduration=4.514924644 podStartE2EDuration="20.240310371s" podCreationTimestamp="2026-02-17 00:24:19 +0000 UTC" firstStartedPulling="2026-02-17 00:24:22.142526076 +0000 UTC m=+933.474080874" lastFinishedPulling="2026-02-17 00:24:37.867911833 +0000 UTC m=+949.199466601" observedRunningTime="2026-02-17 00:24:39.232580716 +0000 UTC m=+950.564135484" watchObservedRunningTime="2026-02-17 00:24:39.240310371 +0000 UTC m=+950.571865139"
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.258629 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" podStartSLOduration=4.085494283 podStartE2EDuration="23.258607757s" podCreationTimestamp="2026-02-17 00:24:16 +0000 UTC" firstStartedPulling="2026-02-17 00:24:18.703407397 +0000 UTC m=+930.034962155" lastFinishedPulling="2026-02-17 00:24:37.876520861 +0000 UTC m=+949.208075629" observedRunningTime="2026-02-17 00:24:39.257254561 +0000 UTC m=+950.588809349" watchObservedRunningTime="2026-02-17 00:24:39.258607757 +0000 UTC m=+950.590162515"
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.298769 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" podStartSLOduration=4.198207036 podStartE2EDuration="13.298739202s" podCreationTimestamp="2026-02-17 00:24:26 +0000 UTC" firstStartedPulling="2026-02-17 00:24:28.761966773 +0000 UTC m=+940.093521531" lastFinishedPulling="2026-02-17 00:24:37.862498929 +0000 UTC m=+949.194053697" observedRunningTime="2026-02-17 00:24:39.278131905 +0000 UTC m=+950.609686693" watchObservedRunningTime="2026-02-17 00:24:39.298739202 +0000 UTC m=+950.630293950"
Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.613052 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" podStartSLOduration=3.2962704990000002 podStartE2EDuration="12.613030945s" podCreationTimestamp="2026-02-17 00:24:27 +0000 UTC" firstStartedPulling="2026-02-17 00:24:28.665168143
+0000 UTC m=+939.996722901" lastFinishedPulling="2026-02-17 00:24:37.981928549 +0000 UTC m=+949.313483347" observedRunningTime="2026-02-17 00:24:39.298462105 +0000 UTC m=+950.630016873" watchObservedRunningTime="2026-02-17 00:24:39.613030945 +0000 UTC m=+950.944585713" Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.613987 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:24:39 crc kubenswrapper[5109]: I0217 00:24:39.614250 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" podUID="d8eb5216-1a79-41e6-bda2-411d232e6e56" containerName="default-interconnect" containerID="cri-o://44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d" gracePeriod=30 Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.030541 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.069400 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-n4sn7"] Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.070374 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8eb5216-1a79-41e6-bda2-411d232e6e56" containerName="default-interconnect" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.070402 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8eb5216-1a79-41e6-bda2-411d232e6e56" containerName="default-interconnect" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.070583 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d8eb5216-1a79-41e6-bda2-411d232e6e56" containerName="default-interconnect" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.074632 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-p8fph\" (UniqueName: \"kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.074739 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.074805 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.074878 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.074911 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.075682 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config" (OuterVolumeSpecName: 
"sasl-config") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.075695 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.075831 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users\") pod \"d8eb5216-1a79-41e6-bda2-411d232e6e56\" (UID: \"d8eb5216-1a79-41e6-bda2-411d232e6e56\") " Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.076392 5109 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.082339 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.082358 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.083708 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.085650 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.087466 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph" (OuterVolumeSpecName: "kube-api-access-p8fph") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "kube-api-access-p8fph". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.090069 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-n4sn7"] Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.090280 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.122748 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "d8eb5216-1a79-41e6-bda2-411d232e6e56" (UID: "d8eb5216-1a79-41e6-bda2-411d232e6e56"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177677 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177717 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177753 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177842 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-users\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177904 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-config\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.177924 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178001 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzsb\" (UniqueName: \"kubernetes.io/projected/15c5eac4-7fc5-49ac-8ae2-b91c46080994-kube-api-access-9qzsb\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: 
\"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178062 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178076 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178086 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178116 5109 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178135 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8fph\" (UniqueName: \"kubernetes.io/projected/d8eb5216-1a79-41e6-bda2-411d232e6e56-kube-api-access-p8fph\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.178151 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d8eb5216-1a79-41e6-bda2-411d232e6e56-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.202972 5109 generic.go:358] "Generic 
(PLEG): container finished" podID="e691a630-aefc-4763-9b47-1ce96aac5fa7" containerID="97792276675b16e1b39644e0c774f6c761ac0c17143ef4b0b624ba18badbd5ec" exitCode=0 Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.203041 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerDied","Data":"97792276675b16e1b39644e0c774f6c761ac0c17143ef4b0b624ba18badbd5ec"} Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.203490 5109 scope.go:117] "RemoveContainer" containerID="97792276675b16e1b39644e0c774f6c761ac0c17143ef4b0b624ba18badbd5ec" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.232450 5109 generic.go:358] "Generic (PLEG): container finished" podID="061c1a1b-358c-46f3-818a-3531ced45ab0" containerID="29e00f964dc1a86c667733325e631cb282810b17fa034d0bf1f9da4302330251" exitCode=0 Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.232705 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerDied","Data":"29e00f964dc1a86c667733325e631cb282810b17fa034d0bf1f9da4302330251"} Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.233136 5109 scope.go:117] "RemoveContainer" containerID="29e00f964dc1a86c667733325e631cb282810b17fa034d0bf1f9da4302330251" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.236910 5109 generic.go:358] "Generic (PLEG): container finished" podID="5970fe16-c4c2-4d25-9d35-f850635f1a63" containerID="97ed8b0a337137d559f5e357cc6da1a9c73080ef294996bf27bd1377234eeb93" exitCode=0 Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.237069 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" 
event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerDied","Data":"97ed8b0a337137d559f5e357cc6da1a9c73080ef294996bf27bd1377234eeb93"} Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.237828 5109 scope.go:117] "RemoveContainer" containerID="97ed8b0a337137d559f5e357cc6da1a9c73080ef294996bf27bd1377234eeb93" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.239301 5109 generic.go:358] "Generic (PLEG): container finished" podID="d8eb5216-1a79-41e6-bda2-411d232e6e56" containerID="44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d" exitCode=0 Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.239743 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" event={"ID":"d8eb5216-1a79-41e6-bda2-411d232e6e56","Type":"ContainerDied","Data":"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d"} Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.239797 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" event={"ID":"d8eb5216-1a79-41e6-bda2-411d232e6e56","Type":"ContainerDied","Data":"bb53c9629b34c93de50a3c9ef55a3e930ab59444ba36d389c44ac3b746d49ce0"} Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.239819 5109 scope.go:117] "RemoveContainer" containerID="44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.239992 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-wm65j" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.279881 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.279921 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.279978 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.280012 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-users\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.280193 5109 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-config\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.280213 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.280243 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzsb\" (UniqueName: \"kubernetes.io/projected/15c5eac4-7fc5-49ac-8ae2-b91c46080994-kube-api-access-9qzsb\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.283105 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-config\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.289198 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-sasl-users\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: 
I0217 00:24:40.289863 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.290798 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.291368 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.293184 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/15c5eac4-7fc5-49ac-8ae2-b91c46080994-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.293232 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 
00:24:40.296851 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-wm65j"] Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.304157 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzsb\" (UniqueName: \"kubernetes.io/projected/15c5eac4-7fc5-49ac-8ae2-b91c46080994-kube-api-access-9qzsb\") pod \"default-interconnect-55bf8d5cb-n4sn7\" (UID: \"15c5eac4-7fc5-49ac-8ae2-b91c46080994\") " pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.309708 5109 scope.go:117] "RemoveContainer" containerID="44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d" Feb 17 00:24:40 crc kubenswrapper[5109]: E0217 00:24:40.310026 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d\": container with ID starting with 44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d not found: ID does not exist" containerID="44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.310048 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d"} err="failed to get container status \"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d\": rpc error: code = NotFound desc = could not find container \"44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d\": container with ID starting with 44baa2c92d2aee955a267a7e7725f98c860d04a6774fba57927abad0ec28552d not found: ID does not exist" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.443748 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" Feb 17 00:24:40 crc kubenswrapper[5109]: I0217 00:24:40.906036 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-n4sn7"] Feb 17 00:24:40 crc kubenswrapper[5109]: W0217 00:24:40.928853 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c5eac4_7fc5_49ac_8ae2_b91c46080994.slice/crio-fd9fcf3838e3975f6f42bdebb7c6bad4a692cf29a1a2cda0e4d3c5ef54ecbe1f WatchSource:0}: Error finding container fd9fcf3838e3975f6f42bdebb7c6bad4a692cf29a1a2cda0e4d3c5ef54ecbe1f: Status 404 returned error can't find the container with id fd9fcf3838e3975f6f42bdebb7c6bad4a692cf29a1a2cda0e4d3c5ef54ecbe1f Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.248798 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.251811 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.254475 5109 generic.go:358] "Generic (PLEG): container finished" podID="e8e44638-f54f-4881-86bc-4d0f985613cb" containerID="fba0a88b4db5c9627129cff42e14e66eae9833b0fa58724137e3a7a5b762853f" exitCode=0 Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.254551 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" 
event={"ID":"e8e44638-f54f-4881-86bc-4d0f985613cb","Type":"ContainerDied","Data":"fba0a88b4db5c9627129cff42e14e66eae9833b0fa58724137e3a7a5b762853f"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.254867 5109 scope.go:117] "RemoveContainer" containerID="fba0a88b4db5c9627129cff42e14e66eae9833b0fa58724137e3a7a5b762853f" Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.258213 5109 generic.go:358] "Generic (PLEG): container finished" podID="fcd429e4-9a3c-42da-9b85-664e59d6d2bd" containerID="2fbf71b3a0146c526600b908c6b6bba161aeac60b874c773f8e306f7fe197057" exitCode=0 Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.258309 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" event={"ID":"fcd429e4-9a3c-42da-9b85-664e59d6d2bd","Type":"ContainerDied","Data":"2fbf71b3a0146c526600b908c6b6bba161aeac60b874c773f8e306f7fe197057"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.258632 5109 scope.go:117] "RemoveContainer" containerID="2fbf71b3a0146c526600b908c6b6bba161aeac60b874c773f8e306f7fe197057" Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.262799 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.267195 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" event={"ID":"15c5eac4-7fc5-49ac-8ae2-b91c46080994","Type":"ContainerStarted","Data":"3230677c5514caec414963c6109999d83b4301edef03c49698e9ff61aa954ed9"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.267253 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" 
event={"ID":"15c5eac4-7fc5-49ac-8ae2-b91c46080994","Type":"ContainerStarted","Data":"fd9fcf3838e3975f6f42bdebb7c6bad4a692cf29a1a2cda0e4d3c5ef54ecbe1f"} Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.381311 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-n4sn7" podStartSLOduration=2.381295222 podStartE2EDuration="2.381295222s" podCreationTimestamp="2026-02-17 00:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:24:41.378522949 +0000 UTC m=+952.710077707" watchObservedRunningTime="2026-02-17 00:24:41.381295222 +0000 UTC m=+952.712849980" Feb 17 00:24:41 crc kubenswrapper[5109]: I0217 00:24:41.472301 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8eb5216-1a79-41e6-bda2-411d232e6e56" path="/var/lib/kubelet/pods/d8eb5216-1a79-41e6-bda2-411d232e6e56/volumes" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.278035 5109 generic.go:358] "Generic (PLEG): container finished" podID="e691a630-aefc-4763-9b47-1ce96aac5fa7" containerID="cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28" exitCode=0 Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.278078 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerDied","Data":"cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28"} Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.278486 5109 scope.go:117] "RemoveContainer" containerID="97792276675b16e1b39644e0c774f6c761ac0c17143ef4b0b624ba18badbd5ec" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.278672 5109 scope.go:117] "RemoveContainer" containerID="cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28" Feb 17 00:24:42 crc kubenswrapper[5109]: E0217 
00:24:42.279208 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-sdm8g_service-telemetry(e691a630-aefc-4763-9b47-1ce96aac5fa7)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" podUID="e691a630-aefc-4763-9b47-1ce96aac5fa7" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.283689 5109 generic.go:358] "Generic (PLEG): container finished" podID="061c1a1b-358c-46f3-818a-3531ced45ab0" containerID="0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9" exitCode=0 Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.283772 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerDied","Data":"0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9"} Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.284150 5109 scope.go:117] "RemoveContainer" containerID="0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9" Feb 17 00:24:42 crc kubenswrapper[5109]: E0217 00:24:42.284370 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx_service-telemetry(061c1a1b-358c-46f3-818a-3531ced45ab0)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" podUID="061c1a1b-358c-46f3-818a-3531ced45ab0" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.288576 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs" 
event={"ID":"e8e44638-f54f-4881-86bc-4d0f985613cb","Type":"ContainerStarted","Data":"effc7effa1f1515126fabea01df67822e9925ace8e0edac95645cefaffe07c29"} Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.292315 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-647df7d596-n5brv" event={"ID":"fcd429e4-9a3c-42da-9b85-664e59d6d2bd","Type":"ContainerStarted","Data":"87e188507cba4561f13b14b510fa84c9c562f9935d6b2aaa4bcb2e9fc48ee53b"} Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.313269 5109 generic.go:358] "Generic (PLEG): container finished" podID="5970fe16-c4c2-4d25-9d35-f850635f1a63" containerID="7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a" exitCode=0 Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.314873 5109 scope.go:117] "RemoveContainer" containerID="7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a" Feb 17 00:24:42 crc kubenswrapper[5109]: E0217 00:24:42.315164 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb_service-telemetry(5970fe16-c4c2-4d25-9d35-f850635f1a63)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" podUID="5970fe16-c4c2-4d25-9d35-f850635f1a63" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.315523 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerDied","Data":"7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a"} Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 00:24:42.333779 5109 scope.go:117] "RemoveContainer" containerID="29e00f964dc1a86c667733325e631cb282810b17fa034d0bf1f9da4302330251" Feb 17 00:24:42 crc kubenswrapper[5109]: I0217 
00:24:42.394253 5109 scope.go:117] "RemoveContainer" containerID="97ed8b0a337137d559f5e357cc6da1a9c73080ef294996bf27bd1377234eeb93" Feb 17 00:24:43 crc kubenswrapper[5109]: I0217 00:24:43.322301 5109 scope.go:117] "RemoveContainer" containerID="cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28" Feb 17 00:24:43 crc kubenswrapper[5109]: E0217 00:24:43.323054 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-sdm8g_service-telemetry(e691a630-aefc-4763-9b47-1ce96aac5fa7)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" podUID="e691a630-aefc-4763-9b47-1ce96aac5fa7" Feb 17 00:24:43 crc kubenswrapper[5109]: I0217 00:24:43.323989 5109 scope.go:117] "RemoveContainer" containerID="0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9" Feb 17 00:24:43 crc kubenswrapper[5109]: E0217 00:24:43.324246 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx_service-telemetry(061c1a1b-358c-46f3-818a-3531ced45ab0)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" podUID="061c1a1b-358c-46f3-818a-3531ced45ab0" Feb 17 00:24:43 crc kubenswrapper[5109]: I0217 00:24:43.325797 5109 scope.go:117] "RemoveContainer" containerID="7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a" Feb 17 00:24:43 crc kubenswrapper[5109]: E0217 00:24:43.325968 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb_service-telemetry(5970fe16-c4c2-4d25-9d35-f850635f1a63)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" podUID="5970fe16-c4c2-4d25-9d35-f850635f1a63" Feb 17 00:24:51 crc kubenswrapper[5109]: I0217 00:24:51.924235 5109 scope.go:117] "RemoveContainer" containerID="866e4b63ca465084f236cc7cce4793015ec72987da5434eb27a3b081be1d9394" Feb 17 00:24:54 crc kubenswrapper[5109]: I0217 00:24:54.464173 5109 scope.go:117] "RemoveContainer" containerID="cb5b36746b7a04e508f56adf3ca6c619590e91c94adeb25ad6cb5d0b14851f28" Feb 17 00:24:54 crc kubenswrapper[5109]: I0217 00:24:54.465158 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:24:55 crc kubenswrapper[5109]: I0217 00:24:55.421264 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-sdm8g" event={"ID":"e691a630-aefc-4763-9b47-1ce96aac5fa7","Type":"ContainerStarted","Data":"59b5bb3bffaa826569b3a297f989e80839bf377b70d19aeb385bccde3f1a79e5"} Feb 17 00:24:56 crc kubenswrapper[5109]: I0217 00:24:56.464751 5109 scope.go:117] "RemoveContainer" containerID="7906e1740cab317b488b435261e91e5758ca23cc9a91de439091457e66d18e3a" Feb 17 00:24:57 crc kubenswrapper[5109]: I0217 00:24:57.442863 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb" event={"ID":"5970fe16-c4c2-4d25-9d35-f850635f1a63","Type":"ContainerStarted","Data":"899b7d0bfb1a154f0e36014dd63c56b92a6fedf638f2395006638f73e577eb43"} Feb 17 00:24:58 crc kubenswrapper[5109]: I0217 00:24:58.464080 5109 scope.go:117] "RemoveContainer" containerID="0197b4330f088385861904a80de71857553edd89ab700005b8edda7916ce93b9" Feb 17 00:24:59 crc kubenswrapper[5109]: I0217 00:24:59.460363 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx" event={"ID":"061c1a1b-358c-46f3-818a-3531ced45ab0","Type":"ContainerStarted","Data":"9a3aa122f38ec051e965e03c96d764ed5a3766d66edd70b7ce5080ca73bd6fc1"} Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.334373 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.374433 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.374557 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.378247 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.378345 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.485629 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-qdr-test-config\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.486059 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.486103 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bpzk\" (UniqueName: \"kubernetes.io/projected/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-kube-api-access-2bpzk\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.587972 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-qdr-test-config\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.588360 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.588433 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2bpzk\" (UniqueName: \"kubernetes.io/projected/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-kube-api-access-2bpzk\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.589176 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-qdr-test-config\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.596880 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.608460 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bpzk\" (UniqueName: \"kubernetes.io/projected/6e3fbcd4-b8bd-49a8-908d-6d531b3f563e-kube-api-access-2bpzk\") pod \"qdr-test\" (UID: \"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e\") " pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.705165 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 17 00:25:11 crc kubenswrapper[5109]: I0217 00:25:11.991887 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 17 00:25:12 crc kubenswrapper[5109]: W0217 00:25:12.021301 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e3fbcd4_b8bd_49a8_908d_6d531b3f563e.slice/crio-6774df1d87c86eaf98d11621bc9019973495adacde0a20b49a65960167ce0a0c WatchSource:0}: Error finding container 6774df1d87c86eaf98d11621bc9019973495adacde0a20b49a65960167ce0a0c: Status 404 returned error can't find the container with id 6774df1d87c86eaf98d11621bc9019973495adacde0a20b49a65960167ce0a0c Feb 17 00:25:12 crc kubenswrapper[5109]: I0217 00:25:12.552256 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e","Type":"ContainerStarted","Data":"6774df1d87c86eaf98d11621bc9019973495adacde0a20b49a65960167ce0a0c"} Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.617037 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" 
event={"ID":"6e3fbcd4-b8bd-49a8-908d-6d531b3f563e","Type":"ContainerStarted","Data":"aa79971ec332ed518cf51fd3b2ec03b03acc054291594b5134b811a4fc111425"} Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.638914 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.035978567 podStartE2EDuration="8.638880135s" podCreationTimestamp="2026-02-17 00:25:11 +0000 UTC" firstStartedPulling="2026-02-17 00:25:12.023784429 +0000 UTC m=+983.355339187" lastFinishedPulling="2026-02-17 00:25:18.626685987 +0000 UTC m=+989.958240755" observedRunningTime="2026-02-17 00:25:19.635468324 +0000 UTC m=+990.967023082" watchObservedRunningTime="2026-02-17 00:25:19.638880135 +0000 UTC m=+990.970434923" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.944218 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f5xnl"] Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.956971 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f5xnl"] Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.957137 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.961953 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.962038 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.961967 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.962891 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.962926 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 17 00:25:19 crc kubenswrapper[5109]: I0217 00:25:19.963363 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022130 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022275 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdt9\" (UniqueName: 
\"kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022428 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022551 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022651 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022745 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.022808 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.124473 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.124560 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.125706 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.125773 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc 
kubenswrapper[5109]: I0217 00:25:20.126536 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.126583 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.126669 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdt9\" (UniqueName: \"kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.126742 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.127105 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " 
pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.127906 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.127654 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.127855 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.128548 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.167896 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdt9\" (UniqueName: \"kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9\") pod \"stf-smoketest-smoke1-f5xnl\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " 
pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.273213 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.392175 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.401573 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.424763 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.534390 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppq7\" (UniqueName: \"kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7\") pod \"curl\" (UID: \"8f77f17f-91b2-496f-b105-53a85177c35e\") " pod="service-telemetry/curl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.557395 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f5xnl"] Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.626076 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerStarted","Data":"59b5d97ded8eab66ec63ffbb1330ee13035f16231effc6cef3d3a2b4c8c77fbf"} Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.636131 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zppq7\" (UniqueName: \"kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7\") pod \"curl\" (UID: \"8f77f17f-91b2-496f-b105-53a85177c35e\") " pod="service-telemetry/curl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 
00:25:20.658193 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppq7\" (UniqueName: \"kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7\") pod \"curl\" (UID: \"8f77f17f-91b2-496f-b105-53a85177c35e\") " pod="service-telemetry/curl" Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.738495 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 17 00:25:20 crc kubenswrapper[5109]: W0217 00:25:20.972219 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f77f17f_91b2_496f_b105_53a85177c35e.slice/crio-96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d WatchSource:0}: Error finding container 96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d: Status 404 returned error can't find the container with id 96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d Feb 17 00:25:20 crc kubenswrapper[5109]: I0217 00:25:20.972689 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 17 00:25:21 crc kubenswrapper[5109]: I0217 00:25:21.635623 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"8f77f17f-91b2-496f-b105-53a85177c35e","Type":"ContainerStarted","Data":"96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d"} Feb 17 00:25:28 crc kubenswrapper[5109]: I0217 00:25:28.700473 5109 generic.go:358] "Generic (PLEG): container finished" podID="8f77f17f-91b2-496f-b105-53a85177c35e" containerID="18a38e460594668ecc4c12879209728e0619d78d9c8b8e5851e6c0c5385e44a7" exitCode=0 Feb 17 00:25:28 crc kubenswrapper[5109]: I0217 00:25:28.700549 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"8f77f17f-91b2-496f-b105-53a85177c35e","Type":"ContainerDied","Data":"18a38e460594668ecc4c12879209728e0619d78d9c8b8e5851e6c0c5385e44a7"} Feb 17 00:25:29 crc kubenswrapper[5109]: I0217 00:25:29.712025 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerStarted","Data":"e19a733a7d7a49f40db6fb07508341c2cd1c9c19fe7df67e0555dbf5fc2d915d"} Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.032569 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.189551 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppq7\" (UniqueName: \"kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7\") pod \"8f77f17f-91b2-496f-b105-53a85177c35e\" (UID: \"8f77f17f-91b2-496f-b105-53a85177c35e\") " Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.197986 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7" (OuterVolumeSpecName: "kube-api-access-zppq7") pod "8f77f17f-91b2-496f-b105-53a85177c35e" (UID: "8f77f17f-91b2-496f-b105-53a85177c35e"). InnerVolumeSpecName "kube-api-access-zppq7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.245568 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_8f77f17f-91b2-496f-b105-53a85177c35e/curl/0.log" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.292364 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zppq7\" (UniqueName: \"kubernetes.io/projected/8f77f17f-91b2-496f-b105-53a85177c35e-kube-api-access-zppq7\") on node \"crc\" DevicePath \"\"" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.522584 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-pp685_192a2b06-be82-48ba-ba35-bb94271c00a4/prometheus-webhook-snmp/0.log" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.719801 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"8f77f17f-91b2-496f-b105-53a85177c35e","Type":"ContainerDied","Data":"96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d"} Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.719844 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96d6549da9a91642c7b708ddb15cfbc42e8775f1625a525ecec459369b71f63d" Feb 17 00:25:30 crc kubenswrapper[5109]: I0217 00:25:30.719909 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 17 00:25:35 crc kubenswrapper[5109]: I0217 00:25:35.767502 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerStarted","Data":"9b09c9ffb05a6df5c7eb4d50357ee0fa82c72b98a555b0b176fd71c0b39fd45d"} Feb 17 00:25:35 crc kubenswrapper[5109]: I0217 00:25:35.812825 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" podStartSLOduration=2.390608663 podStartE2EDuration="16.812800344s" podCreationTimestamp="2026-02-17 00:25:19 +0000 UTC" firstStartedPulling="2026-02-17 00:25:20.577628374 +0000 UTC m=+991.909183132" lastFinishedPulling="2026-02-17 00:25:34.999820045 +0000 UTC m=+1006.331374813" observedRunningTime="2026-02-17 00:25:35.798219977 +0000 UTC m=+1007.129774775" watchObservedRunningTime="2026-02-17 00:25:35.812800344 +0000 UTC m=+1007.144355142" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.148432 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521466-6xj4r"] Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.149541 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f77f17f-91b2-496f-b105-53a85177c35e" containerName="curl" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.149553 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f77f17f-91b2-496f-b105-53a85177c35e" containerName="curl" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.149669 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f77f17f-91b2-496f-b105-53a85177c35e" containerName="curl" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.155636 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.159422 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\"" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.159568 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.160245 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.168462 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521466-6xj4r"] Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.192168 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fmb\" (UniqueName: \"kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb\") pod \"auto-csr-approver-29521466-6xj4r\" (UID: \"f8309c7f-1781-421c-8599-950417fafbbd\") " pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.293821 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fmb\" (UniqueName: \"kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb\") pod \"auto-csr-approver-29521466-6xj4r\" (UID: \"f8309c7f-1781-421c-8599-950417fafbbd\") " pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.325630 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fmb\" (UniqueName: \"kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb\") pod \"auto-csr-approver-29521466-6xj4r\" (UID: 
\"f8309c7f-1781-421c-8599-950417fafbbd\") " pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.477647 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.701958 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-pp685_192a2b06-be82-48ba-ba35-bb94271c00a4/prometheus-webhook-snmp/0.log" Feb 17 00:26:00 crc kubenswrapper[5109]: I0217 00:26:00.753308 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521466-6xj4r"] Feb 17 00:26:01 crc kubenswrapper[5109]: I0217 00:26:01.001434 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" event={"ID":"f8309c7f-1781-421c-8599-950417fafbbd","Type":"ContainerStarted","Data":"d7818f3002ac0d3df6f4fe8f650cbf0d136ed01483c080d5cd81747fa4d71c5f"} Feb 17 00:26:03 crc kubenswrapper[5109]: I0217 00:26:03.036020 5109 generic.go:358] "Generic (PLEG): container finished" podID="f8309c7f-1781-421c-8599-950417fafbbd" containerID="afc2e436735be8f94faa13d739b9e68408398874e0229313081270e515a8362b" exitCode=0 Feb 17 00:26:03 crc kubenswrapper[5109]: I0217 00:26:03.036224 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" event={"ID":"f8309c7f-1781-421c-8599-950417fafbbd","Type":"ContainerDied","Data":"afc2e436735be8f94faa13d739b9e68408398874e0229313081270e515a8362b"} Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.057823 5109 generic.go:358] "Generic (PLEG): container finished" podID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerID="e19a733a7d7a49f40db6fb07508341c2cd1c9c19fe7df67e0555dbf5fc2d915d" exitCode=0 Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.057983 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerDied","Data":"e19a733a7d7a49f40db6fb07508341c2cd1c9c19fe7df67e0555dbf5fc2d915d"} Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.058944 5109 scope.go:117] "RemoveContainer" containerID="e19a733a7d7a49f40db6fb07508341c2cd1c9c19fe7df67e0555dbf5fc2d915d" Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.332279 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.474180 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7fmb\" (UniqueName: \"kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb\") pod \"f8309c7f-1781-421c-8599-950417fafbbd\" (UID: \"f8309c7f-1781-421c-8599-950417fafbbd\") " Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.483487 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb" (OuterVolumeSpecName: "kube-api-access-l7fmb") pod "f8309c7f-1781-421c-8599-950417fafbbd" (UID: "f8309c7f-1781-421c-8599-950417fafbbd"). InnerVolumeSpecName "kube-api-access-l7fmb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:26:04 crc kubenswrapper[5109]: I0217 00:26:04.576029 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l7fmb\" (UniqueName: \"kubernetes.io/projected/f8309c7f-1781-421c-8599-950417fafbbd-kube-api-access-l7fmb\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.067488 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" event={"ID":"f8309c7f-1781-421c-8599-950417fafbbd","Type":"ContainerDied","Data":"d7818f3002ac0d3df6f4fe8f650cbf0d136ed01483c080d5cd81747fa4d71c5f"} Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.067681 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7818f3002ac0d3df6f4fe8f650cbf0d136ed01483c080d5cd81747fa4d71c5f" Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.067562 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521466-6xj4r" Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.407898 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29521460-x7q4k"] Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.412325 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29521460-x7q4k"] Feb 17 00:26:05 crc kubenswrapper[5109]: I0217 00:26:05.474684 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1ab308-fa27-446a-ab9d-98ab9b1ccb89" path="/var/lib/kubelet/pods/9e1ab308-fa27-446a-ab9d-98ab9b1ccb89/volumes" Feb 17 00:26:07 crc kubenswrapper[5109]: I0217 00:26:07.086025 5109 generic.go:358] "Generic (PLEG): container finished" podID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerID="9b09c9ffb05a6df5c7eb4d50357ee0fa82c72b98a555b0b176fd71c0b39fd45d" exitCode=0 Feb 17 00:26:07 crc kubenswrapper[5109]: I0217 00:26:07.086259 5109 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerDied","Data":"9b09c9ffb05a6df5c7eb4d50357ee0fa82c72b98a555b0b176fd71c0b39fd45d"} Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.422994 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.550589 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbdt9\" (UniqueName: \"kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.550750 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.550800 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.550911 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.551010 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.552049 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.552245 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script\") pod \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\" (UID: \"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0\") " Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.558502 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9" (OuterVolumeSpecName: "kube-api-access-hbdt9") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "kube-api-access-hbdt9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.576764 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.578612 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.581241 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.581843 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.583535 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.584820 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" (UID: "7d23dc4c-f1c8-4c51-8ae9-a26307169bd0"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655130 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655210 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655229 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hbdt9\" (UniqueName: \"kubernetes.io/projected/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-kube-api-access-hbdt9\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655248 5109 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655263 5109 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655278 
5109 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:08 crc kubenswrapper[5109]: I0217 00:26:08.655294 5109 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/7d23dc4c-f1c8-4c51-8ae9-a26307169bd0-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 17 00:26:09 crc kubenswrapper[5109]: I0217 00:26:09.111624 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" event={"ID":"7d23dc4c-f1c8-4c51-8ae9-a26307169bd0","Type":"ContainerDied","Data":"59b5d97ded8eab66ec63ffbb1330ee13035f16231effc6cef3d3a2b4c8c77fbf"} Feb 17 00:26:09 crc kubenswrapper[5109]: I0217 00:26:09.111721 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59b5d97ded8eab66ec63ffbb1330ee13035f16231effc6cef3d3a2b4c8c77fbf" Feb 17 00:26:09 crc kubenswrapper[5109]: I0217 00:26:09.111669 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f5xnl" Feb 17 00:26:10 crc kubenswrapper[5109]: I0217 00:26:10.462915 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-f5xnl_7d23dc4c-f1c8-4c51-8ae9-a26307169bd0/smoketest-collectd/0.log" Feb 17 00:26:10 crc kubenswrapper[5109]: I0217 00:26:10.711113 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-f5xnl_7d23dc4c-f1c8-4c51-8ae9-a26307169bd0/smoketest-ceilometer/0.log" Feb 17 00:26:10 crc kubenswrapper[5109]: I0217 00:26:10.980921 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-n4sn7_15c5eac4-7fc5-49ac-8ae2-b91c46080994/default-interconnect/0.log" Feb 17 00:26:11 crc kubenswrapper[5109]: I0217 00:26:11.240083 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-sdm8g_e691a630-aefc-4763-9b47-1ce96aac5fa7/bridge/2.log" Feb 17 00:26:11 crc kubenswrapper[5109]: I0217 00:26:11.504928 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-sdm8g_e691a630-aefc-4763-9b47-1ce96aac5fa7/sg-core/0.log" Feb 17 00:26:11 crc kubenswrapper[5109]: I0217 00:26:11.782758 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-647df7d596-n5brv_fcd429e4-9a3c-42da-9b85-664e59d6d2bd/bridge/1.log" Feb 17 00:26:12 crc kubenswrapper[5109]: I0217 00:26:12.063340 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-647df7d596-n5brv_fcd429e4-9a3c-42da-9b85-664e59d6d2bd/sg-core/0.log" Feb 17 00:26:12 crc kubenswrapper[5109]: I0217 00:26:12.353528 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb_5970fe16-c4c2-4d25-9d35-f850635f1a63/bridge/2.log" Feb 17 00:26:12 crc kubenswrapper[5109]: I0217 00:26:12.598899 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-x6kcb_5970fe16-c4c2-4d25-9d35-f850635f1a63/sg-core/0.log" Feb 17 00:26:12 crc kubenswrapper[5109]: I0217 00:26:12.868777 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs_e8e44638-f54f-4881-86bc-4d0f985613cb/bridge/1.log" Feb 17 00:26:13 crc kubenswrapper[5109]: I0217 00:26:13.117534 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-56975d7867-p2pjs_e8e44638-f54f-4881-86bc-4d0f985613cb/sg-core/0.log" Feb 17 00:26:13 crc kubenswrapper[5109]: I0217 00:26:13.355306 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx_061c1a1b-358c-46f3-818a-3531ced45ab0/bridge/2.log" Feb 17 00:26:13 crc kubenswrapper[5109]: I0217 00:26:13.584148 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-7stvx_061c1a1b-358c-46f3-818a-3531ced45ab0/sg-core/0.log" Feb 17 00:26:16 crc kubenswrapper[5109]: I0217 00:26:16.470311 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-ssbdj_04580c9e-33df-4a1a-9ac8-6af08f9682ae/operator/0.log" Feb 17 00:26:16 crc kubenswrapper[5109]: I0217 00:26:16.809769 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_d92b9849-5e99-4939-abb4-27b6fe87adb3/prometheus/0.log" Feb 17 00:26:17 crc kubenswrapper[5109]: I0217 00:26:17.112226 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_dc9f509d-8e56-4537-b20e-a52f477f336f/elasticsearch/0.log" Feb 17 00:26:17 crc kubenswrapper[5109]: I0217 00:26:17.431800 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-pp685_192a2b06-be82-48ba-ba35-bb94271c00a4/prometheus-webhook-snmp/0.log" Feb 17 00:26:17 crc kubenswrapper[5109]: I0217 00:26:17.787217 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_637374b2-565d-4fbc-ba62-e151d5fef990/alertmanager/0.log" Feb 17 00:26:30 crc kubenswrapper[5109]: I0217 00:26:30.887360 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-74dvh_af5ebf50-c291-4382-a449-c201099497ad/operator/0.log" Feb 17 00:26:34 crc kubenswrapper[5109]: I0217 00:26:34.074407 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-ssbdj_04580c9e-33df-4a1a-9ac8-6af08f9682ae/operator/0.log" Feb 17 00:26:34 crc kubenswrapper[5109]: I0217 00:26:34.354210 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_6e3fbcd4-b8bd-49a8-908d-6d531b3f563e/qdr/0.log" Feb 17 00:26:52 crc kubenswrapper[5109]: I0217 00:26:52.047034 5109 scope.go:117] "RemoveContainer" containerID="60064ce0f5c654fd28d5ecef8237e8187cd9603b608997f24e4ea2d831747923" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.124212 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-472sz/must-gather-dv6t5"] Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125428 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-ceilometer" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125444 5109 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-ceilometer" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125459 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8309c7f-1781-421c-8599-950417fafbbd" containerName="oc" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125469 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8309c7f-1781-421c-8599-950417fafbbd" containerName="oc" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125491 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-collectd" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125499 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-collectd" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125835 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8309c7f-1781-421c-8599-950417fafbbd" containerName="oc" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125858 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-collectd" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.125872 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d23dc4c-f1c8-4c51-8ae9-a26307169bd0" containerName="smoketest-ceilometer" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.137293 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.140088 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-472sz/must-gather-dv6t5"] Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.140262 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-472sz\"/\"default-dockercfg-xnxl8\"" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.142830 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-472sz\"/\"openshift-service-ca.crt\"" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.143519 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-472sz\"/\"kube-root-ca.crt\"" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.269823 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output\") pod \"must-gather-dv6t5\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.269883 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gv6g\" (UniqueName: \"kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g\") pod \"must-gather-dv6t5\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.370959 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output\") pod \"must-gather-dv6t5\" (UID: 
\"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.371030 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6gv6g\" (UniqueName: \"kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g\") pod \"must-gather-dv6t5\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.372082 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output\") pod \"must-gather-dv6t5\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.392885 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gv6g\" (UniqueName: \"kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g\") pod \"must-gather-dv6t5\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") " pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.464903 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-472sz/must-gather-dv6t5" Feb 17 00:26:59 crc kubenswrapper[5109]: I0217 00:26:59.744788 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-472sz/must-gather-dv6t5"] Feb 17 00:27:00 crc kubenswrapper[5109]: I0217 00:27:00.584939 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-472sz/must-gather-dv6t5" event={"ID":"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4","Type":"ContainerStarted","Data":"67ca141ebc990af8ad64a4621bf217119f90ae15045b5e741098ab126057ff28"} Feb 17 00:27:00 crc kubenswrapper[5109]: I0217 00:27:00.799848 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:27:00 crc kubenswrapper[5109]: I0217 00:27:00.799924 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:27:05 crc kubenswrapper[5109]: I0217 00:27:05.630621 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-472sz/must-gather-dv6t5" event={"ID":"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4","Type":"ContainerStarted","Data":"67fb9f059ab9c12cf333d65856c4d75e754e43639e5dede8122d5b956502e481"} Feb 17 00:27:05 crc kubenswrapper[5109]: I0217 00:27:05.630982 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-472sz/must-gather-dv6t5" event={"ID":"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4","Type":"ContainerStarted","Data":"fdf2b4efc49bba2c8aeb43736fe23dbb9397404d7e083cbbd870d0ec52253f29"} Feb 17 00:27:05 crc 
kubenswrapper[5109]: I0217 00:27:05.654027 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-472sz/must-gather-dv6t5" podStartSLOduration=1.789267133 podStartE2EDuration="6.654007996s" podCreationTimestamp="2026-02-17 00:26:59 +0000 UTC" firstStartedPulling="2026-02-17 00:26:59.724058416 +0000 UTC m=+1091.055613184" lastFinishedPulling="2026-02-17 00:27:04.588799279 +0000 UTC m=+1095.920354047" observedRunningTime="2026-02-17 00:27:05.650821652 +0000 UTC m=+1096.982376440" watchObservedRunningTime="2026-02-17 00:27:05.654007996 +0000 UTC m=+1096.985562764"
Feb 17 00:27:30 crc kubenswrapper[5109]: I0217 00:27:30.799898 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:27:30 crc kubenswrapper[5109]: I0217 00:27:30.800530 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:27:52 crc kubenswrapper[5109]: I0217 00:27:52.251342 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-cwkk6_99aef4f8-4236-448e-94bb-cca311ff5d9b/control-plane-machine-set-operator/0.log"
Feb 17 00:27:52 crc kubenswrapper[5109]: I0217 00:27:52.400228 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-7l95k_2f13cd6d-3c3b-4ed8-b692-cfe56a634a19/machine-api-operator/0.log"
Feb 17 00:27:52 crc kubenswrapper[5109]: I0217 00:27:52.411751 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-7l95k_2f13cd6d-3c3b-4ed8-b692-cfe56a634a19/kube-rbac-proxy/0.log"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.143685 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521468-85tww"]
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.153698 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521468-85tww"]
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.153834 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.157556 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.157797 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.157928 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\""
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.218834 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgpsq\" (UniqueName: \"kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq\") pod \"auto-csr-approver-29521468-85tww\" (UID: \"62739932-af5d-47b5-b959-41fd3a01ecaf\") " pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.320291 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgpsq\" (UniqueName: \"kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq\") pod \"auto-csr-approver-29521468-85tww\" (UID: \"62739932-af5d-47b5-b959-41fd3a01ecaf\") " pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.346139 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgpsq\" (UniqueName: \"kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq\") pod \"auto-csr-approver-29521468-85tww\" (UID: \"62739932-af5d-47b5-b959-41fd3a01ecaf\") " pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.482329 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.720247 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521468-85tww"]
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.800491 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.800583 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.800741 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.801753 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 00:28:00 crc kubenswrapper[5109]: I0217 00:28:00.801866 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" containerID="cri-o://baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4" gracePeriod=600
Feb 17 00:28:01 crc kubenswrapper[5109]: I0217 00:28:01.098877 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521468-85tww" event={"ID":"62739932-af5d-47b5-b959-41fd3a01ecaf","Type":"ContainerStarted","Data":"139fb049eebb1e396fd9e1a423fa61d511a57e5f28ecbda53c9fe747fed9b373"}
Feb 17 00:28:01 crc kubenswrapper[5109]: I0217 00:28:01.101955 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4" exitCode=0
Feb 17 00:28:01 crc kubenswrapper[5109]: I0217 00:28:01.102057 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4"}
Feb 17 00:28:01 crc kubenswrapper[5109]: I0217 00:28:01.102146 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"2665f35a3de6e3966a805796059ff0ac96ef2591f89e2fbdecfb8369ece08498"}
Feb 17 00:28:01 crc kubenswrapper[5109]: I0217 00:28:01.102173 5109 scope.go:117] "RemoveContainer" containerID="d5d7ec8c550e7e2cfc407a940fdcc36fdc2c2f34ba89176aa04f58fa822b9c35"
Feb 17 00:28:02 crc kubenswrapper[5109]: I0217 00:28:02.110146 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521468-85tww" event={"ID":"62739932-af5d-47b5-b959-41fd3a01ecaf","Type":"ContainerStarted","Data":"6d2fd711e794857bf33b85c595220e581e571e58e6d20bfd1300da873ea83241"}
Feb 17 00:28:02 crc kubenswrapper[5109]: I0217 00:28:02.130641 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29521468-85tww" podStartSLOduration=1.096562865 podStartE2EDuration="2.130621911s" podCreationTimestamp="2026-02-17 00:28:00 +0000 UTC" firstStartedPulling="2026-02-17 00:28:00.738641101 +0000 UTC m=+1152.070195859" lastFinishedPulling="2026-02-17 00:28:01.772700147 +0000 UTC m=+1153.104254905" observedRunningTime="2026-02-17 00:28:02.126178734 +0000 UTC m=+1153.457733492" watchObservedRunningTime="2026-02-17 00:28:02.130621911 +0000 UTC m=+1153.462176669"
Feb 17 00:28:03 crc kubenswrapper[5109]: I0217 00:28:03.123234 5109 generic.go:358] "Generic (PLEG): container finished" podID="62739932-af5d-47b5-b959-41fd3a01ecaf" containerID="6d2fd711e794857bf33b85c595220e581e571e58e6d20bfd1300da873ea83241" exitCode=0
Feb 17 00:28:03 crc kubenswrapper[5109]: I0217 00:28:03.123317 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521468-85tww" event={"ID":"62739932-af5d-47b5-b959-41fd3a01ecaf","Type":"ContainerDied","Data":"6d2fd711e794857bf33b85c595220e581e571e58e6d20bfd1300da873ea83241"}
Feb 17 00:28:04 crc kubenswrapper[5109]: I0217 00:28:04.492062 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:04 crc kubenswrapper[5109]: I0217 00:28:04.586139 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgpsq\" (UniqueName: \"kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq\") pod \"62739932-af5d-47b5-b959-41fd3a01ecaf\" (UID: \"62739932-af5d-47b5-b959-41fd3a01ecaf\") "
Feb 17 00:28:04 crc kubenswrapper[5109]: I0217 00:28:04.596847 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq" (OuterVolumeSpecName: "kube-api-access-hgpsq") pod "62739932-af5d-47b5-b959-41fd3a01ecaf" (UID: "62739932-af5d-47b5-b959-41fd3a01ecaf"). InnerVolumeSpecName "kube-api-access-hgpsq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:28:04 crc kubenswrapper[5109]: I0217 00:28:04.688873 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgpsq\" (UniqueName: \"kubernetes.io/projected/62739932-af5d-47b5-b959-41fd3a01ecaf-kube-api-access-hgpsq\") on node \"crc\" DevicePath \"\""
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.142502 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521468-85tww" event={"ID":"62739932-af5d-47b5-b959-41fd3a01ecaf","Type":"ContainerDied","Data":"139fb049eebb1e396fd9e1a423fa61d511a57e5f28ecbda53c9fe747fed9b373"}
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.142865 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139fb049eebb1e396fd9e1a423fa61d511a57e5f28ecbda53c9fe747fed9b373"
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.142736 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521468-85tww"
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.192060 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29521462-stpx6"]
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.197267 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29521462-stpx6"]
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.472009 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c35e34-4f4c-48cd-81d5-3e75e02cc47a" path="/var/lib/kubelet/pods/73c35e34-4f4c-48cd-81d5-3e75e02cc47a/volumes"
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.680185 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-h8g6s_e7f6207b-d886-4468-a19a-fa322372c3a1/cert-manager-controller/0.log"
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.837047 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-smnww_614ff088-aa03-4a19-bdd0-00cecbe79da4/cert-manager-cainjector/0.log"
Feb 17 00:28:05 crc kubenswrapper[5109]: I0217 00:28:05.909182 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-f5pbf_dca37b2b-eb0d-4835-beb6-a53dde67a2f8/cert-manager-webhook/0.log"
Feb 17 00:28:21 crc kubenswrapper[5109]: I0217 00:28:21.138534 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-vvp9n_034835f5-e853-4070-9944-045206b1b990/prometheus-operator/0.log"
Feb 17 00:28:21 crc kubenswrapper[5109]: I0217 00:28:21.277709 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l_6877d358-6ca8-41b1-8eac-69453394b64b/prometheus-operator-admission-webhook/0.log"
Feb 17 00:28:21 crc kubenswrapper[5109]: I0217 00:28:21.305712 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t_642eed45-4b50-4af0-9656-27559798d21c/prometheus-operator-admission-webhook/0.log"
Feb 17 00:28:21 crc kubenswrapper[5109]: I0217 00:28:21.462472 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-gtfv5_f517e4ec-97f5-4f18-8ea5-f1f3b37dd332/operator/0.log"
Feb 17 00:28:21 crc kubenswrapper[5109]: I0217 00:28:21.485924 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bgfhc_52985de1-df17-4d46-8b16-07bedfc870c0/perses-operator/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.531438 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/util/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.671919 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/pull/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.690713 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/pull/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.738304 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/util/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.884318 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/pull/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.900340 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/extract/0.log"
Feb 17 00:28:36 crc kubenswrapper[5109]: I0217 00:28:36.902329 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1lx277_1530a0bf-d290-47e8-9d2e-d85a94ff1983/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.040876 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.268052 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/pull/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.281780 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.292354 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/pull/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.430224 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/extract/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.458913 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.503378 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f54vs5_72c6aa0c-3846-4057-be15-63d2e9e5f270/pull/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.611483 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.758568 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.821855 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/pull/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.842220 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/pull/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.967197 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/util/0.log"
Feb 17 00:28:37 crc kubenswrapper[5109]: I0217 00:28:37.997147 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/extract/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.033530 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5t9d2x_049a8588-d5af-44ed-a85c-77a01850a79d/pull/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.168215 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/util/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.350761 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/pull/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.359771 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/pull/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.364782 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/util/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.493409 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/util/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.558212 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/extract/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.577739 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xztdb_564af301-6673-4e7b-8882-b923a9df0634/pull/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.675324 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-utilities/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.801139 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-utilities/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.825944 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-content/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.852229 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-content/0.log"
Feb 17 00:28:38 crc kubenswrapper[5109]: I0217 00:28:38.962971 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.000349 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.064257 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.124471 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v7wnw_d2b9166f-e498-40ed-9e69-9223b30c69e2/registry-server/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.247512 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.259505 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.263009 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.413498 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.441733 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.464350 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-z95j7_a5c18790-37ee-408b-98c6-feaedc47c512/marketplace-operator/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.571667 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rvx97_450c297a-058e-408b-8625-ede5977ddfb1/registry-server/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.729349 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.859644 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-utilities/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.863194 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.872441 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-content/0.log"
Feb 17 00:28:39 crc kubenswrapper[5109]: I0217 00:28:39.992488 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-utilities/0.log"
Feb 17 00:28:40 crc kubenswrapper[5109]: I0217 00:28:40.011998 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/extract-content/0.log"
Feb 17 00:28:40 crc kubenswrapper[5109]: I0217 00:28:40.214906 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lq4st_2b28e0b9-fd8e-4f55-9b5e-d7d74eb6760b/registry-server/0.log"
Feb 17 00:28:50 crc kubenswrapper[5109]: I0217 00:28:50.037093 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log"
Feb 17 00:28:50 crc kubenswrapper[5109]: I0217 00:28:50.037845 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bbh4j_a1a466bd-accd-4381-b1f0-357d6e20410e/kube-multus/0.log"
Feb 17 00:28:50 crc kubenswrapper[5109]: I0217 00:28:50.059770 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:28:50 crc kubenswrapper[5109]: I0217 00:28:50.059836 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 17 00:28:52 crc kubenswrapper[5109]: I0217 00:28:52.190479 5109 scope.go:117] "RemoveContainer" containerID="67cfa28822a2f9a6fff59388f803c023da372a8bf6cb4e5789b56bdff2fe08ab"
Feb 17 00:28:53 crc kubenswrapper[5109]: I0217 00:28:53.283469 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f8ff5bdf8-jwn5l_6877d358-6ca8-41b1-8eac-69453394b64b/prometheus-operator-admission-webhook/0.log"
Feb 17 00:28:53 crc kubenswrapper[5109]: I0217 00:28:53.299101 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-vvp9n_034835f5-e853-4070-9944-045206b1b990/prometheus-operator/0.log"
Feb 17 00:28:53 crc kubenswrapper[5109]: I0217 00:28:53.310547 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f8ff5bdf8-mgg2t_642eed45-4b50-4af0-9656-27559798d21c/prometheus-operator-admission-webhook/0.log"
Feb 17 00:28:53 crc kubenswrapper[5109]: I0217 00:28:53.385085 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-gtfv5_f517e4ec-97f5-4f18-8ea5-f1f3b37dd332/operator/0.log"
Feb 17 00:28:53 crc kubenswrapper[5109]: I0217 00:28:53.443013 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-bgfhc_52985de1-df17-4d46-8b16-07bedfc870c0/perses-operator/0.log"
Feb 17 00:29:32 crc kubenswrapper[5109]: I0217 00:29:32.939501 5109 generic.go:358] "Generic (PLEG): container finished" podID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerID="fdf2b4efc49bba2c8aeb43736fe23dbb9397404d7e083cbbd870d0ec52253f29" exitCode=0
Feb 17 00:29:32 crc kubenswrapper[5109]: I0217 00:29:32.939655 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-472sz/must-gather-dv6t5" event={"ID":"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4","Type":"ContainerDied","Data":"fdf2b4efc49bba2c8aeb43736fe23dbb9397404d7e083cbbd870d0ec52253f29"}
Feb 17 00:29:32 crc kubenswrapper[5109]: I0217 00:29:32.940927 5109 scope.go:117] "RemoveContainer" containerID="fdf2b4efc49bba2c8aeb43736fe23dbb9397404d7e083cbbd870d0ec52253f29"
Feb 17 00:29:33 crc kubenswrapper[5109]: I0217 00:29:33.659573 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-472sz_must-gather-dv6t5_7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4/gather/0.log"
Feb 17 00:29:39 crc kubenswrapper[5109]: I0217 00:29:39.863370 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-472sz/must-gather-dv6t5"]
Feb 17 00:29:39 crc kubenswrapper[5109]: I0217 00:29:39.864303 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-472sz/must-gather-dv6t5" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="copy" containerID="cri-o://67fb9f059ab9c12cf333d65856c4d75e754e43639e5dede8122d5b956502e481" gracePeriod=2
Feb 17 00:29:39 crc kubenswrapper[5109]: I0217 00:29:39.866774 5109 status_manager.go:895] "Failed to get status for pod" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" pod="openshift-must-gather-472sz/must-gather-dv6t5" err="pods \"must-gather-dv6t5\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-472sz\": no relationship found between node 'crc' and this object"
Feb 17 00:29:39 crc kubenswrapper[5109]: I0217 00:29:39.878473 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-472sz/must-gather-dv6t5"]
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.026478 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-472sz_must-gather-dv6t5_7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4/copy/0.log"
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.027142 5109 generic.go:358] "Generic (PLEG): container finished" podID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerID="67fb9f059ab9c12cf333d65856c4d75e754e43639e5dede8122d5b956502e481" exitCode=143
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.270416 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-472sz_must-gather-dv6t5_7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4/copy/0.log"
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.271355 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-472sz/must-gather-dv6t5"
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.273203 5109 status_manager.go:895] "Failed to get status for pod" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" pod="openshift-must-gather-472sz/must-gather-dv6t5" err="pods \"must-gather-dv6t5\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-472sz\": no relationship found between node 'crc' and this object"
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.363914 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output\") pod \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") "
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.363988 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gv6g\" (UniqueName: \"kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g\") pod \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\" (UID: \"7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4\") "
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.370048 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g" (OuterVolumeSpecName: "kube-api-access-6gv6g") pod "7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" (UID: "7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4"). InnerVolumeSpecName "kube-api-access-6gv6g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.444393 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" (UID: "7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.466205 5109 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 17 00:29:40 crc kubenswrapper[5109]: I0217 00:29:40.466482 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6gv6g\" (UniqueName: \"kubernetes.io/projected/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4-kube-api-access-6gv6g\") on node \"crc\" DevicePath \"\""
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.042766 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-472sz_must-gather-dv6t5_7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4/copy/0.log"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.044170 5109 scope.go:117] "RemoveContainer" containerID="67fb9f059ab9c12cf333d65856c4d75e754e43639e5dede8122d5b956502e481"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.044185 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-472sz/must-gather-dv6t5"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.047088 5109 status_manager.go:895] "Failed to get status for pod" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" pod="openshift-must-gather-472sz/must-gather-dv6t5" err="pods \"must-gather-dv6t5\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-472sz\": no relationship found between node 'crc' and this object"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.082092 5109 status_manager.go:895] "Failed to get status for pod" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" pod="openshift-must-gather-472sz/must-gather-dv6t5" err="pods \"must-gather-dv6t5\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-472sz\": no relationship found between node 'crc' and this object"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.093529 5109 scope.go:117] "RemoveContainer" containerID="fdf2b4efc49bba2c8aeb43736fe23dbb9397404d7e083cbbd870d0ec52253f29"
Feb 17 00:29:41 crc kubenswrapper[5109]: I0217 00:29:41.478027 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" path="/var/lib/kubelet/pods/7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4/volumes"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.159567 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29521470-bw5nh"]
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162773 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="copy"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162826 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="copy"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162940 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="62739932-af5d-47b5-b959-41fd3a01ecaf" containerName="oc"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162955 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="62739932-af5d-47b5-b959-41fd3a01ecaf" containerName="oc"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162977 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="gather"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.162991 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="gather"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.163231 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="gather"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.163248 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="7372ff9a-f7b8-4d7a-85fc-05c9ceae48c4" containerName="copy"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.163268 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="62739932-af5d-47b5-b959-41fd3a01ecaf" containerName="oc"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.177125 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4"]
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.177356 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29521470-bw5nh"
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.180447 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.181032 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.183259 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-r4lwp\""
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.185846 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521470-bw5nh"]
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.185927 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4"]
Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.186074 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.188524 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.188759 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.217295 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfq5\" (UniqueName: \"kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.217378 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.217443 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.217470 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kvxf\" (UniqueName: \"kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf\") pod \"auto-csr-approver-29521470-bw5nh\" (UID: \"2277c80a-3eb9-40d4-97f9-49fee7c9eb03\") " pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.319084 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.319203 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.319236 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8kvxf\" (UniqueName: \"kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf\") pod \"auto-csr-approver-29521470-bw5nh\" (UID: \"2277c80a-3eb9-40d4-97f9-49fee7c9eb03\") " pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.319339 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjfq5\" (UniqueName: \"kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.321305 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.332983 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.341341 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kvxf\" (UniqueName: \"kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf\") pod \"auto-csr-approver-29521470-bw5nh\" (UID: \"2277c80a-3eb9-40d4-97f9-49fee7c9eb03\") " pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.347043 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjfq5\" (UniqueName: \"kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5\") pod \"collect-profiles-29521470-8qnq4\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.521441 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.548122 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.833240 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4"] Feb 17 00:30:00 crc kubenswrapper[5109]: W0217 00:30:00.837186 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2eab5b_d547_4e25_ba8f_2fd19e34c483.slice/crio-a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a WatchSource:0}: Error finding container a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a: Status 404 returned error can't find the container with id a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.839528 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 00:30:00 crc kubenswrapper[5109]: I0217 00:30:00.998764 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29521470-bw5nh"] Feb 17 00:30:01 crc kubenswrapper[5109]: W0217 00:30:01.003439 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2277c80a_3eb9_40d4_97f9_49fee7c9eb03.slice/crio-19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4 WatchSource:0}: Error finding container 19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4: Status 404 returned error can't find the container with id 19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4 Feb 17 00:30:01 crc kubenswrapper[5109]: I0217 00:30:01.261069 5109 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" event={"ID":"ca2eab5b-d547-4e25-ba8f-2fd19e34c483","Type":"ContainerStarted","Data":"8678fee812ea6f600beba5629b7fe19d63f2a5d9febbab40a1bffa746bc24854"} Feb 17 00:30:01 crc kubenswrapper[5109]: I0217 00:30:01.264868 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" event={"ID":"ca2eab5b-d547-4e25-ba8f-2fd19e34c483","Type":"ContainerStarted","Data":"a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a"} Feb 17 00:30:01 crc kubenswrapper[5109]: I0217 00:30:01.264922 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" event={"ID":"2277c80a-3eb9-40d4-97f9-49fee7c9eb03","Type":"ContainerStarted","Data":"19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4"} Feb 17 00:30:01 crc kubenswrapper[5109]: I0217 00:30:01.284914 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" podStartSLOduration=1.28489395 podStartE2EDuration="1.28489395s" podCreationTimestamp="2026-02-17 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 00:30:01.282677432 +0000 UTC m=+1272.614232230" watchObservedRunningTime="2026-02-17 00:30:01.28489395 +0000 UTC m=+1272.616448718" Feb 17 00:30:02 crc kubenswrapper[5109]: I0217 00:30:02.280257 5109 generic.go:358] "Generic (PLEG): container finished" podID="ca2eab5b-d547-4e25-ba8f-2fd19e34c483" containerID="8678fee812ea6f600beba5629b7fe19d63f2a5d9febbab40a1bffa746bc24854" exitCode=0 Feb 17 00:30:02 crc kubenswrapper[5109]: I0217 00:30:02.280442 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" 
event={"ID":"ca2eab5b-d547-4e25-ba8f-2fd19e34c483","Type":"ContainerDied","Data":"8678fee812ea6f600beba5629b7fe19d63f2a5d9febbab40a1bffa746bc24854"} Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.294737 5109 generic.go:358] "Generic (PLEG): container finished" podID="2277c80a-3eb9-40d4-97f9-49fee7c9eb03" containerID="673c4ad8a933ac5b8edd19bf943c9d8c1bc39e593d418fad4925e430f9d0e414" exitCode=0 Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.294838 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" event={"ID":"2277c80a-3eb9-40d4-97f9-49fee7c9eb03","Type":"ContainerDied","Data":"673c4ad8a933ac5b8edd19bf943c9d8c1bc39e593d418fad4925e430f9d0e414"} Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.609923 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.680424 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume\") pod \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.680468 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume\") pod \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\" (UID: \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.680513 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjfq5\" (UniqueName: \"kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5\") pod \"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\" (UID: 
\"ca2eab5b-d547-4e25-ba8f-2fd19e34c483\") " Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.681549 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca2eab5b-d547-4e25-ba8f-2fd19e34c483" (UID: "ca2eab5b-d547-4e25-ba8f-2fd19e34c483"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.685945 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca2eab5b-d547-4e25-ba8f-2fd19e34c483" (UID: "ca2eab5b-d547-4e25-ba8f-2fd19e34c483"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.686140 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5" (OuterVolumeSpecName: "kube-api-access-jjfq5") pod "ca2eab5b-d547-4e25-ba8f-2fd19e34c483" (UID: "ca2eab5b-d547-4e25-ba8f-2fd19e34c483"). InnerVolumeSpecName "kube-api-access-jjfq5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.782384 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.782437 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:03 crc kubenswrapper[5109]: I0217 00:30:03.782456 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jjfq5\" (UniqueName: \"kubernetes.io/projected/ca2eab5b-d547-4e25-ba8f-2fd19e34c483-kube-api-access-jjfq5\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.309677 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.309706 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521470-8qnq4" event={"ID":"ca2eab5b-d547-4e25-ba8f-2fd19e34c483","Type":"ContainerDied","Data":"a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a"} Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.309787 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a370537999b61335321c4d2bec77a5fca0fffa5003f7796facdb111774820a2a" Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.624610 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.695773 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kvxf\" (UniqueName: \"kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf\") pod \"2277c80a-3eb9-40d4-97f9-49fee7c9eb03\" (UID: \"2277c80a-3eb9-40d4-97f9-49fee7c9eb03\") " Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.701049 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf" (OuterVolumeSpecName: "kube-api-access-8kvxf") pod "2277c80a-3eb9-40d4-97f9-49fee7c9eb03" (UID: "2277c80a-3eb9-40d4-97f9-49fee7c9eb03"). InnerVolumeSpecName "kube-api-access-8kvxf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 17 00:30:04 crc kubenswrapper[5109]: I0217 00:30:04.797671 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8kvxf\" (UniqueName: \"kubernetes.io/projected/2277c80a-3eb9-40d4-97f9-49fee7c9eb03-kube-api-access-8kvxf\") on node \"crc\" DevicePath \"\"" Feb 17 00:30:05 crc kubenswrapper[5109]: I0217 00:30:05.323653 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" event={"ID":"2277c80a-3eb9-40d4-97f9-49fee7c9eb03","Type":"ContainerDied","Data":"19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4"} Feb 17 00:30:05 crc kubenswrapper[5109]: I0217 00:30:05.323732 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b26ca1a01fc80b066fe3fa66326b4637258d17ff47c9c094854fd8214b25c4" Feb 17 00:30:05 crc kubenswrapper[5109]: I0217 00:30:05.323774 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29521470-bw5nh" Feb 17 00:30:05 crc kubenswrapper[5109]: I0217 00:30:05.712708 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29521464-pzhn2"] Feb 17 00:30:05 crc kubenswrapper[5109]: I0217 00:30:05.723416 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29521464-pzhn2"] Feb 17 00:30:07 crc kubenswrapper[5109]: I0217 00:30:07.481906 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09f79a5-2bda-4621-97cc-fb9a19d50348" path="/var/lib/kubelet/pods/f09f79a5-2bda-4621-97cc-fb9a19d50348/volumes" Feb 17 00:30:30 crc kubenswrapper[5109]: I0217 00:30:30.800286 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:30:30 crc kubenswrapper[5109]: I0217 00:30:30.801131 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:30:52 crc kubenswrapper[5109]: I0217 00:30:52.358455 5109 scope.go:117] "RemoveContainer" containerID="ddee3129cf41bf9312e30f8f6a22678c27fbc042d7a66610b5915bc56ebf1645" Feb 17 00:31:00 crc kubenswrapper[5109]: I0217 00:31:00.800029 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:31:00 crc kubenswrapper[5109]: 
I0217 00:31:00.800810 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:31:30 crc kubenswrapper[5109]: I0217 00:31:30.799855 5109 patch_prober.go:28] interesting pod/machine-config-daemon-hjvm4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 00:31:30 crc kubenswrapper[5109]: I0217 00:31:30.800554 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 00:31:30 crc kubenswrapper[5109]: I0217 00:31:30.800699 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" Feb 17 00:31:30 crc kubenswrapper[5109]: I0217 00:31:30.801633 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2665f35a3de6e3966a805796059ff0ac96ef2591f89e2fbdecfb8369ece08498"} pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 00:31:30 crc kubenswrapper[5109]: I0217 00:31:30.801738 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" podUID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" 
containerName="machine-config-daemon" containerID="cri-o://2665f35a3de6e3966a805796059ff0ac96ef2591f89e2fbdecfb8369ece08498" gracePeriod=600 Feb 17 00:31:31 crc kubenswrapper[5109]: I0217 00:31:31.287881 5109 generic.go:358] "Generic (PLEG): container finished" podID="5867f26a-eddd-4d0b-bfa3-e7c68e976330" containerID="2665f35a3de6e3966a805796059ff0ac96ef2591f89e2fbdecfb8369ece08498" exitCode=0 Feb 17 00:31:31 crc kubenswrapper[5109]: I0217 00:31:31.287926 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerDied","Data":"2665f35a3de6e3966a805796059ff0ac96ef2591f89e2fbdecfb8369ece08498"} Feb 17 00:31:31 crc kubenswrapper[5109]: I0217 00:31:31.288346 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hjvm4" event={"ID":"5867f26a-eddd-4d0b-bfa3-e7c68e976330","Type":"ContainerStarted","Data":"e1941af5d9f8bc08104f9f367f1e295ea44d370bede4a60909e70de91d494191"} Feb 17 00:31:31 crc kubenswrapper[5109]: I0217 00:31:31.288383 5109 scope.go:117] "RemoveContainer" containerID="baef34204e4f9db8d16df2c382a110a258b0baf62af5a457595fac5d1746cfb4"